8
votes

The easy answer to this is "buy more RAM" but I am hoping to get a more constructive answer and learn something in the process.

I am running Windows 7 64-bit with 8GB of RAM.

I have several very large .csv.gz files (~450MB uncompressed each) with exactly the same header information, which I read into R and process. Then I need to combine the processed R objects into a single master object and write it back out to .csv on disk.

I do this same operation on multiple sets of files. As an example, I have 5 folders each with 6 csv.gz files in them. I need to end up with 5 master files, one for each folder.

My code looks something like the following:

for( loop through folders ){
    master.file = data.table()

    for ( loop through files ) {
        filename = list.files( ... )
        file = as.data.table ( read.csv( gzfile( filename ), stringsAsFactors = F ))
        gc()

        ...do some processing to file...

        # append file to the running master.file
        if ( nrow(master.file) == 0 ) {
            master.file = file
        } else {
            master.file = rbindlist( list( master.file, file) )
        }
        rm( file, filename )
        gc()
    }

    write.csv( master.file, unique master filename, row.names = FALSE )

    rm( master.file )
    gc()

}

This code does not work. I get the cannot allocate memory error before it writes out the final csv. I was watching Resource Monitor while running this code and don't understand why this processing would use 8GB of RAM. The total of all the file sizes is roughly 2.7GB, so I was expecting the maximum memory R would use to be about 2.7GB. But the write.csv operation seems to use about the same amount of memory as the object being written, so writing out a 2.7GB object that is already in memory would use roughly 5.4GB.

That behaviour, combined with a for loop in which memory does not seem to be getting adequately freed up, seems to be the problem.
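
For reference, a minimal way to check the peak-memory claim above is to reset R's "max used" counters, run the write, and read them back (the object and file names are just placeholders taken from the pseudocode above):

# reset the "max used" memory statistics before the write
gc(reset = TRUE)
write.csv(master.file, "master_file.csv", row.names = FALSE)
# the "max used" column now shows the peak memory consumed since the reset
gc()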

I suspect that I could use the sqldf package as mentioned here and here, but when I set the sqldf statement equal to an R variable I ended up with the same out-of-memory errors.
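
For what it's worth, the sqldf attempt looked roughly like the sketch below (the file name is hypothetical, and the .csv.gz is assumed to be decompressed first, since read.csv.sql reads plain .csv files). Assigning the result to an R variable still pulls the whole table into memory, which is presumably why the same errors appeared:

library(sqldf)
library(data.table)

# rough sketch only -- hypothetical file name, decompressed .csv assumed
file = as.data.table( read.csv.sql(
    "folder1/file1.csv",
    sql    = "select * from file",
    header = TRUE,
    sep    = ","
))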

1
The general rule is that you should have 3x the size of your largest object. (So you've already violated that rule.) Furthermore, you may have 8GB of RAM, but you need to subtract the RAM used by the OS and other applications and background utilities. – IRTFM
Why don't you use write.table(yourFirstDataTable, sep = ",", file = "YourFile.csv") for the first file read in and processed, and then write.table(yourOtherDataTables, sep = ",", file = "YourFile.csv", append = TRUE, col.names = FALSE) for the rest? – A5C1D2H2I1M1N2O1R2T1
@AnandaMahto, that is an excellent suggestion! I forgot that there was an append option in write.table. write.csv disables that option, but with write.table I don't have to append the data in R; I can just append each new object to the file on disk. – Brian D

1 Answer

5
votes

Update 12/23/2013 - The following solution works entirely in R without running out of memory (thanks @AnandaMahto).
The major caveat with this method is that you must be absolutely sure that the files you are reading in and writing out each time have exactly the same header columns, in exactly the same order, or your R processing code must ensure this, since write.table does not check it for you (a sketch of such a check follows the code below).

for( loop through folders ){

    for ( loop through files ) {

        filename = list.files( ... )
        file = as.data.table ( read.csv( gzfile( filename ), stringsAsFactors = F ))
        gc()

        ...do some processing to file...

        # write the processed file straight out to the master csv on disk:
        # create it on the first pass, append (without the header) afterwards
        if ( first time through inner loop) {
            write.table(file,
                        "masterfile.csv",
                        sep = ",",
                        dec = ".",
                        qmethod = "double",
                        row.names = FALSE)
        } else {
            write.table(file,
                        "masterfile.csv",
                        sep = ",",
                        dec = ".",
                        qmethod = "double",
                        row.names = FALSE,
                        append = TRUE,
                        col.names = FALSE)
        }
        rm( file, filename )
        gc()
    }
    gc()
}
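
Since write.table does not compare the new rows against the header already in the file, here is a minimal sketch of the kind of header check mentioned in the caveat above (the helper name is mine and not part of the solution):

# illustrative helper, not part of the solution above.
# reference.cols would be captured from the first file in each folder,
# e.g. reference.cols = names(file) on the first pass through the inner loop
check.header = function( dt, reference.cols, filename ) {
    if ( !identical( names(dt), reference.cols ) ) {
        stop( "Header mismatch in ", filename, ": expected ",
              paste( reference.cols, collapse = ", " ) )
    }
    invisible( TRUE )
}

Calling check.header( file, reference.cols, filename ) just before each write.table call would stop the loop instead of silently writing a corrupted master file.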

My Initial Solution:

for( loop through folders ){

    for ( loop through files ) {
        filename = list.files( ... )
        file = as.data.table ( read.csv( gzfile( filename ), stringsAsFactors = F ))
        gc()

        ...do some processing to file...

        #write out the file
        write.csv( file, ... )
        rm( file, filename )
        gc()
    }        
    gc()
}

I then downloaded and installed GnuWin32's sed package and used Windows command line tools to append the files as follows:

copy /b *common_pattern*.csv master_file.csv

This appends together all of the individual .csv files whose names have the text pattern "common_pattern" in them, headers and all.

Then I use sed.exe to remove all but the first header line as follows:

"c:\Program Files (x86)\GnuWin32\bin\sed.exe" -i 2,${/header_pattern/d;} master_file.csv

-i tells sed to just overwrite the specified file (in-place).

2,$ tells sed to look at the range from the 2nd row to the last row ($)

{/header_pattern/d;} tells sed to find all lines in that range containing the text "header_pattern" and (d) delete them

To make sure this was doing what I wanted, I first printed the lines I was planning to delete:

"c:\Program Files (x86)\GnuWin32\bin\sed.exe" -n 2,${/header_pattern/p;} master_file.csv

Works like a charm; I just wish I could do it all in R.