I just ran into a problem when using read_csv() and read.csv() to import a CSV file into R. My file contains 1.7 million rows and 78 variables. Most of those variables are integers. When I use read_csv(), some cells that contain integers are converted to NAs and I get the following warnings. Those cells do contain integers, so I do not know why it goes wrong.
10487 parsing failures.
row  col expected   actual
3507 X27 an integer 2946793000
3507 X46 an integer 5246675000
3508 X8  an integer 11599000000
3508 X23 an integer 2185000000
3508 X26 an integer 2185000000
When I access df[3507, 27], it just shows NA. Also, X27, X46, and X8 are all integer columns, so I do not understand why the function works for most rows but fails on these few.
However, when I use read.csv(), it works and returns 2946793000. Can someone tell me why these two functions behave differently here?
read_csv looks at the first rows of your data and guesses the data type of each column. There are times when it guesses incorrectly, especially with massive datasets. For example, I had a dataset with a gender column that readr thought was boolean (all the first rows were "F"). Try reading the head of the file up to the row where the first error occurs and seeing if there's some string formatting. You could also force it to read the offending columns as characters and then convert them to numeric. – Andrew Brēza
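The comment's suggestion of overriding the guessed column types can be sketched as below. This is a minimal, hedged example: "file.csv" and the column list stand in for your actual file and offending columns. Note that the failing values (e.g. 2946793000) all exceed R's 32-bit integer maximum, .Machine$integer.max (2147483647), which is why a column guessed as integer fails to parse them; base read.csv avoids the problem by falling back to a double column.

```r
library(readr)

# Force the offending columns to double so large values don't overflow
# the 32-bit integer type that read_csv guessed from the first rows.
# ("file.csv" and the column names are placeholders for your data.)
df <- read_csv(
  "file.csv",
  col_types = cols(
    X8  = col_double(),
    X23 = col_double(),
    X26 = col_double(),
    X27 = col_double(),
    X46 = col_double(),
    .default = col_guess()
  )
)

# Alternatively, let readr scan every row before guessing column types:
df <- read_csv("file.csv", guess_max = 2000000)
```

Raising guess_max is slower on a 1.7-million-row file but requires no knowledge of which columns are affected; the explicit col_types override is faster and reproducible.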