How to find strings in a very large array of data?

Hi
I have a csv file containing a large number of numbers and a few random strings like 'zgdf'. I need to find them and set them to zero. I cannot use 'csvread' (because of the strings), so I use 'textscan' to read the file.
I then convert the data to numeric using str2double. MATLAB turns the string values into NaN, which is fine for me, but it takes a long time, especially because this has to be done for many similar files.
Any faster method to sort this out?
This is how I read the data (the original file has two columns and a large number of rows):
fileID = fopen(filename);
C = textscan(fileID,'%s %s','Delimiter',',');
fclose(fileID);
for i = 1:length(C{1})
    D(i) = str2double(C{1}{i});
end
Thanks

10 Comments

Please explain how long it takes and which speed-up you need. Maybe the process is limited by disk access and MATLAB is not the bottleneck? If you post your current code, a matching solution is more likely. A short, meaningful example of the inputs would also be useful. Is the number of columns known in advance?
Thanks everyone.
Since I'm using textscan, the result is a cell array. Converting it to double is the part that takes a long time (and by long I mean about 10 s, but over many files this adds up to minutes). If there were no strings, I could have used csvread, which is much faster.
So this is the part which makes it longer I guess:
B is the cell generated from reading the data using textscan:
for i = 1:length(B{1})
    A(i) = str2double(B{1}{i});
end
Thanks
B in your code appears to be a cell array of cell arrays. Is that the output of textscan()? What formatSpec are you using in your call to textscan()?
As Jan mentioned, a simple, abbreviated example of B and B{1} would answer many questions.
See the update in my answer. If my answer and the updated suggestion do not work, please provide more detail about B.
The original csv file has two columns:
This is How I read it:
C = textscan(fileID,'%s %s','Delimiter',',');
Both methods in my answer should work. Have you tried them?
Thanks Adam, Jan and Ridwan,
I just got the chance to try your inputs.
using your common suggestion:
A = str2double(C{1})
actually took longer (by around 1 s) on my PC than the loop, which is weird! I expected this to be faster as well!
I couldn't try Adam's import code, since my old version doesn't support 'delimitedTextImportOptions' function.
Any other way I can reduce this?
1. Is there something wrong with the format of textscan that I am using?
C = textscan(fileID,'%s %s','Delimiter',',');
2. I tried to use sscanf instead of str2double. It was faster, but it couldn't read the strings and returned [ ], which I then couldn't store in an array. So this loop fails when there is a string:
for j = 1:length(C{1})
    D(j) = sscanf(C{1}{j},'%f');
end
Thanks again
Edit: I modified the original post to include how I read the data.
If you have a known and fixed set of noise, say {"zgdf", "cvbn"}, you could have used the "TreatAsEmpty" option with textscan(). But I believe that's not the case. Sigh!
Steven on 21 Nov 2019 (edited)
Thanks for your message.
I might have, not entirely sure.
Assuming that I know what the strings are, how do I do that in textscan?
And then when it treats them as empty, I don't think I can assign them to an array, right? How to deal with that empty cell then?
Thanks
Knowing your MATLAB release is usually helpful, which is why it's included as an optional field when you're posting a question in this forum.
I've confirmed that the loop method of str2double() is indeed faster than the direct application to the cell array. Sometimes loops are faster.
See method 3 in my answer which applies your sscanf idea and avoids the error you described.
See method 4 for a FEX function that is like str2double() but much faster.
Method 5 is very fast but requires r2019a.
Lastly, whenever you build a variable within a loop, always pre-allocate the variable. Not pre-allocating the variable will definitely slow down your code.
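Putting the points above together, a minimal sketch of the pre-allocated conversion loop might look like this (illustrative only, using the C from the question):

A = zeros(size(C{1}));          % pre-allocate: one slot per row
for i = 1:numel(C{1})
    A(i) = str2double(C{1}{i}); % non-numeric entries become NaN
end
A(isnan(A)) = 0;                % then set the string entries to zero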
@Steven
I have updated my answer with the syntax for textscan with the "TreatAsEmpty" option. It returns NaN in place of those known noisy chars. Adding the 'EmptyValue', 0 option will return 0 instead of NaN.
Not sure how much speed-up that will give, though :(


 Accepted Answer

[This answer has been reorganized following the discussion in the comment section under the question]
Method 1
fid = fopen('myCSVfile.csv');
C = textscan(fid,'%s %s','Delimiter',',');
fclose(fid);
A = str2double(C{1}); % Faster than doing the same thing in a loop.
[update] the loop method below is actually faster
A = zeros(size(C{1})); % <--- always pre-allocate!
for i = 1:numel(C{1})
A(i) = str2double(C{1}{i});
end
Method 2
Try this modification of the script produced by the Import Data tool. Rather than importing your data and then converting it with str2double(), this imports the data as numeric and replaces non-numeric elements with NaN. I think it should be faster than your approach, though perhaps not by much.
The only 2 variables you'll need to change to adapt to your data are
  • file (the filename, or, preferably, the full path to your file)
  • The NumVariables value (number of columns of data)
%% Setup the Import Options and import the data
file = "C:\Users\name\Documents\MATLAB\myCSVfile.csv"; % Full path to your file (or just file name)
opts = delimitedTextImportOptions("NumVariables", 2); % Number of columns of data
opts.VariableTypes(:) = {'double'}; % read in all data as double (nan for strings)
opts.Delimiter = ",";
opts.ExtraColumnsRule = "ignore";
opts.EmptyLineRule = "read";
Data = readtable(file, opts); % Read in as table
Data = Data{:,:}; % Convert to matrix
Method 3
D = zeros(size(C{1})); % <--- pre-allocate!
for j = 1:length(C{1})
    s = sscanf(C{1}{j},'%f');
    if ~isempty(s)
        D(j) = s;
    end
end
This is 4.5x faster than method 1.
Method 4
This FEX function is designed to overcome the slow speed of str2double()
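Assuming the function meant here is str2doubleq (the FEX link is not preserved in this copy, so the name is an assumption), it is intended as a drop-in replacement:

% str2doubleq: assumed name of the FEX function; same interface as str2double
A = str2doubleq(C{1});  % non-numeric entries become NaN
A(isnan(A)) = 0;        % then set them to zero, as the question requires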
Method 5
A very fast solution is to read the data in using readmatrix(), which automatically converts non-numeric elements to NaN, but it requires R2019a.
file = 'myCSVfile.csv';
D = readmatrix(file); %that's it, just 2 lines

3 Comments

I also don't have readmatrix(). Time to upgrade!
Steven on 21 Nov 2019 (edited)
Thanks Adam,
I tried on 2018b and Method 2 was much faster! Thanks.
On my PC, this is how long each took for a given file:
Method 1: 5.8 s
Method 2: 0.6 s
Method 3: 3.1 s
I couldn't check method 5 though.
Great experience!
Thanks guys
Thanks for the feedback!


More Answers (2)

Given, the list of noise is {'a', 'b', 'ee'}:
C = cell2mat(textscan(fileID,'%f %f','Delimiter',',','TreatAsEmpty',{'a','b','ee'},'EmptyValue',0));
Try this!!
%% Old Answer
Updated using Method 1 from Adam:
C = textscan(fileID,'%s %s','Delimiter',',');
C = [str2double(C{1}) str2double(C{2})];
C(isnan(C)) = 0;

9 Comments

The problem is that the data is a non-numeric array that should be converted to a numeric matrix where non-numeric values are replaced by NaNs. The OP found that str2double() does this but is looking for a faster solution.
Thanks. As Steven mentioned "I need to find them and set them to zero", I was under the impression that running a loop to find the NaNs was taking the time.
Your lines of code will definitely solve that part of the problem! :)
I assumed the rest to be trivial ;-)
Hmmmm, I tested the TreatAsEmpty idea using the attached file and didn't get expected results.
I got this:
C = cell2mat(textscan(fileID,'%f %f','Delimiter',',','TreatAsEmpty',{'sdfs','1 sec'},'EmptyValue',0));
% C =
%
% 1 0
% 2 0
% 3 2
% 0 3
% 3 0
% 0 3
% 3 0
% 3 3
% 3 3
% 3 3
% 3 3
% 3 3
% 0 3
% 3 0
% 3 3
% 3 3
% 3 3
% 3 3
% 3 3
% 3 3
Right, if you know the strings in the file ahead of time you can list them in the TreatAsEmpty value. I assume the strings are not known prior to reading in the file.
Sure, Steven. Please vote up if you liked the conversation. Thanks!


"random strings like 'zgdf'" If that means letters of the US alphabet, this code is rather fast.
%%
chr = fileread('cssm.txt');
chr = regexprep( chr, '[A-Za-z]+', '0.0' );
cac = textscan( chr, '%f%f', 'Delimiter',',', 'CollectOutput',true );
num = cac{1};
Result:
>> num(1:10,:)
ans =
0.81472 0.15761
0 0.97059
0.12699 0.95717
0.91338 0.48538
0.63236 0.80028
0.09754 0.14189
0.2785 0
0.54688 0.91574
0 0.79221
0.96489 0.95949
Where cssm.txt contains
0.81472, 0.15761
abc , 0.97059
0.12699, 0.95717
0.91338, 0.48538
0.63236, 0.80028
0.09754, 0.14189
0.27850, def
0.54688, 0.91574
zgdf , 0.79221
0.96489, 0.95949
et cetera
In response to comments
See the caveat in the first line of my answer.
I fail to find a regular expression for "not a legal number" and if one exists it might not be that fast.
It's straightforward to add a few characters (many becomes impractical), e.g. '^â', and make sure that the string is followed by a comma or end of line.
>> chr = regexprep( '12.3, abc, g^â, 1.0e5, def ', '(?m)[A-Za-zâ^]+(?=\x20*\r?(,|$))', '0.0' )
chr =
'12.3, 0.0, 0.0, 1.0e5, 0.0 '
>>
Look-ahead, e.g. '(?=\x20*\r?(,|$))', is reasonably fast, but look-behind sometimes ruins the performance.
The above regex fails for 'def1', '1deg' and '10a'
fileread in combination with CRLF as the newline sequence poses a problem when using regular expressions: the anchor $ doesn't recognise CRLF as a newline. (Please tell me if I missed something.) The best way to avoid this problem is to replace fileread with a function that uses
[fid, msg] = fopen( filespec, 'rt' );
chr = fread( fid, inf, '*char' ).';
fclose( fid );

5 Comments

That is fast! Note that if any strings contain numbers ('def1' or '1deg' or '10a'), problems ensue.
What about 1.0e5 ?
Steven on 21 Nov 2019 (edited)
Thanks Per.
Sometimes, characters include something like "g^â".
I added a response to my answer.


Asked on 20 Nov 2019
Edited on 23 Nov 2019