Bug in readtable()? If the first values in a CSV column are missing, the whole column is misinterpreted
I am reading a big CSV file (500K lines) with readtable(). In the CSV, some columns have their first 250+ lines empty (i.e. ",,,," in the file), while the rare non-missing values further down are either text strings or dates (in DD-MM-YYYY format). readtable() somehow interprets these columns as numeric and converts all the strings and dates into NaNs, so instead of sparsely populated data (strings among empties, or dates among NaTs) I end up with columns that are 100% NaN.
Furthermore, if I move the lines with data up, even a few dozen positions, readtable() starts reading everything normally!
So it looks like readtable() checks only the first ~250 values to determine the type of a column, which, in my opinion, is a bug! (Although I understand this was likely done to improve speed.)
Is there a way to fix it systematically? I have lots of such CSVs, with thousands of columns in them, so a manual check and manual fix is not an option...
UPD: a test file (truncated to 510 lines) is attached; the behaviour is still the same. The problem columns are the 2nd and the 3rd (p190, p191). The first non-empty value is on data line 270.
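Here is a minimal sketch that reproduces the behaviour (the file name and values here are made up; my real files have thousands of columns):

% Write a CSV whose second column is empty until row 270, then holds one date
fid = fopen('repro.csv','w');
fprintf(fid,'id,p191\n');
for k = 1:510
    if k == 270
        fprintf(fid,'%d,15-06-2024\n',k);   % the lone DD-MM-YYYY value
    else
        fprintf(fid,'%d,\n',k);             % empty field
    end
end
fclose(fid);
T = readtable('repro.csv');
class(T.p191)   % 'double': the column was detected as numeric, the date became NaN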
Mathieu NOE
on 3 Apr 2025
Hello, could you share one file (or an extract, if the size is over 5 MB) so we can test?
Konstantin
on 3 Apr 2025
"Is there a way to fix it systematically?"
Specify a DelimitedTextImportOptions object and use that for importing all of the files:
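A minimal sketch of that approach; the variable names, count, and header layout below are assumptions based on the question:

% Describe the file layout once (hypothetical names and types)
opts = delimitedTextImportOptions('NumVariables',3);
opts.VariableNames = ["p189","p190","p191"];
opts.VariableTypes = ["double","string","datetime"];
opts = setvaropts(opts,"p191",'InputFormat','dd-MM-yyyy');  % DD-MM-YYYY dates
opts.VariableNamesLine = 1;   % header row
opts.DataLines = [2 Inf];     % data starts on line 2
% Reuse the same options object for every file
files = dir('*.csv');
for k = 1:numel(files)
    T = readtable(fullfile(files(k).folder, files(k).name), opts);
    % ... process T ...
end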
"So, it looks like readtable() checks only ~250 first values to determine the type of the column, which, in my opinion, is a bug! (Although I understand that it was likely made to improve speed.)"
You consider it a bug when READTABLE does not check all rows before deciding what class a column has.
Another hypothetical user considers it a bug when READTABLE imports the entire file into memory (taking five minutes) just to check ten billion rows, because there might be something inconsistent on the final row.
Who is right?
Perhaps TMW should add an option: "read all rows before deciding the class, even if this takes five minutes".
Konstantin
on 3 Apr 2025
"In normal usage, readtable() eventually reads everything into the memory..."
Yes, but as interpreted data (not as a bytestream like the raw file data).
"...and so full checking would not be SO slow."
It would basically require reading the file twice OR storing the file data twice in memory.
Steven Lord
on 3 Apr 2025
"it will drastically reduce the number of errors at a reasonable performance cost."
Unless the first non-empty value in that column is in the last row of the table, in which case it will have to read in the whole file.
If you know that your data may be sparsely populated in your file, I would recommend using the detectImportOptions approach Cris LaPierre suggested to allow readtable to take advantage of that knowledge.
Walter Roberson
on 3 Apr 2025
Note that when you use detectImportOptions(), it checks more lines than readtable() does on its own.
Jeremy Hughes
on 2 May 2025
detectImportOptions doesn't actually check more lines, it just defaults to treating the variable as text instead of double when the first 250 rows are all empty. The reason for the difference is to preserve the behavior from before detectImportOptions was introduced. Before that, it only considered the first row of data for the variable types and empty fields were treated as double by default.
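If it helps to verify this, the detected types can be inspected directly (assuming p190 and p191 are columns 2 and 3, as stated in the question; per the behaviour described above they should come back as text, not double):

opts = detectImportOptions('TEST_ongoing.csv');
opts.VariableTypes(2:3)   % p190 and p191 are detected as 'char', not 'double'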
Answers (1)
I wouldn't call it a bug. It's an artifact of trying to autodetect the format, which will naturally not get it right every time. I would recommend specifying the import options it is not detecting correctly.
opts = detectImportOptions('TEST_ongoing.csv');
% Override the detected types for the two problem columns
opts = setvartype(opts,["p190","p191"],["string","datetime"]);
data = readtable('TEST_ongoing.csv',opts)
% Spot-check a populated row and one near the end
data([270,508],:)
Walter Roberson
on 3 Apr 2025
"Alas, I cannot manually mark up thousands of columns in the database."
What if I were to tell you that the real purpose of the column is to hold numeric values, with lots and lots of leading missing values followed by a number of "mistakes"?
What if I were to tell you that the real purpose of the column is to hold strings?
Detection of import options cannot guess both of those possibilities simultaneously. Without a hint about which case it is, it will surely guess incorrectly sometimes.
Cris LaPierre
on 3 Apr 2025
"Alas, I cannot manually mark up thousands of columns in the database."
I made an assumption that you had thousands of files, all of them with the same format as the file you shared, just with more rows.
I'd agree that, if you have thousands of columns, this is a cumbersome approach. There may be other approaches, but in order to provide a more specific answer for your use case, we need to see an actual example of what you are working with.
Is there no definition of the(se) database(s)? How does anybody, or any application, know how to populate or read them otherwise? Unless they are uncontrolled, just text fields that allow for mistakes, which would seem to be the kind of thing @Konstantin is running into if he has columns that contain both date strings and other text (or date strings in inconsistent formats?). It seems the fundamental problem is in the database itself, not in MATLAB.
Lacking controls on the database or at least information about its column types that can be used, it would seem the only real solution would be to read the whole file as a cell array and then try to parse it.
Perhaps a workable solution would be to put the import into a try...catch...end block: if the auto-import fails, fall back to the cell import and try to deal with it programmatically, and only if that fails get a human involved (a sketch follows below).
But clearly, it isn't practical for the auto-detect functionality to have to parse the whole file first as a generic implementation.
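A rough sketch of that fallback; the all-NaN-column test standing in for "the auto-import fails" is a hypothetical heuristic:

fname = 'TEST_ongoing.csv';
try
    T = readtable(fname);
    % Treat an entirely-NaN numeric column as a sign that autodetection failed
    suspect = varfun(@(v) isnumeric(v) && all(isnan(v)), T, 'OutputFormat','uniform');
    assert(~any(suspect), 'import:suspectColumn', 'suspicious all-NaN column')
catch
    % Fall back: read everything as text and parse it programmatically
    C = readcell(fname, 'TextType','string');
    % ... inspect and convert the columns of C, or flag the file for human review ...
end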
Cris LaPierre
on 4 Apr 2025
You have a good use case here for an enhancement. I'd suggest submitting an enhancement suggestion to MathWorks to address this need. You can do that here: https://supportcases.mathworks.com/mwsupport/s/casetypeselection?language=en_US&c__caseParameter=productusage&s_tid=srlp_product
Cris LaPierre
on 4 Apr 2025
Sounds like another enhancement suggestion to me.
The datetime conversion routines are pretty picky, failing if valid date/time strings aren't all the same format in a single call. This is clearly an implementation choice for performance reasons, but can become annoying.
If this can really happen and isn't just a test case, for an immediate fix it seems more expeditious to read the date column as cellstr and then test for the empty case and convert the existing values, or use the try...catch construction to deal with the (presumed) aberration. I agree that NaT should be the result... although this seems to be an artifact of readtable and not part of datetime's repertoire...
s="";s=standardizeMissing(s,"")
datetime(s,'InputFormat',"yyyy-MMM-dd")
Another possibility I haven't tested might be to use 'MissingRule' and/or 'ImportErrorRule', although they are global rather than offering the granularity of setting a replacement value by column or data type (another enhancement I think I've previously suggested).
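An untested sketch of that idea; the column name and date format are taken from the question, and whether 'fill' produces the desired NaT here is an assumption:

opts = detectImportOptions('TEST_ongoing.csv');
opts = setvartype(opts,"p191","datetime");
opts = setvaropts(opts,"p191",'InputFormat','dd-MM-yyyy');
opts.MissingRule     = 'fill';   % empty fields take the fill value (NaT for datetime)
opts.ImportErrorRule = 'fill';   % unconvertible fields also take the fill value
T = readtable('TEST_ongoing.csv', opts);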