Question
Suppose you have a large ASCII text file, with a random non-negative integer on each line, each in the range from 0 to 1,000,000,000. There are 100,000,000 lines in the file. What's the fastest way to read through the file and calculate the sum of all the integers?
Constraint: we've got 10MB of RAM to work with. The file is 1GB in size, so we don't want to read the whole thing in and then process it.
Here are various solutions I've tried. I found the results rather surprising.
Is there anything faster that I've missed?
Please note: all timings given below are for running the algorithm 10 times in total (run once and discard; start timer; run 10 times; stop timer). The machine is a fairly slow Core 2 Duo.
Method 1: the natural approach
The first thing to try is the obvious approach:
private long sumLineByLine() throws NumberFormatException, IOException {
    BufferedReader br = new BufferedReader(new FileReader(file));
    String line;
    long total = 0;
    while ((line = br.readLine()) != null) {
        int k = Integer.parseInt(line);
        total += k;
    }
    br.close();
    return total;
}
Note that the maximum possible return value is 10^17 (100,000,000 lines of at most 1,000,000,000 each), which still easily fits in a long, so we don't have to worry about overflow.
On my machine, running this 11 times and discounting the first run takes around 92.9 seconds.
Method 2: a minor tweak
Inspired by a comment on this question, I tried not creating a new int k to store the result of parsing the line, and instead just adding the parsed value directly to total. So this:
while ((line = br.readLine()) != null) {
    int k = Integer.parseInt(line);
    total += k;
}
becomes this:
while ((line = br.readLine()) != null)
    total += Integer.parseInt(line);
I was certain that this wouldn't make any difference, and thought it highly likely that the compiler would generate the same bytecode for the two versions. But, to my surprise, it did shave a little time off: we're down to 92.1 seconds.
Method 3: manually parsing the integer
One thing that bothers me about the code so far is that we turn the String into an int, and then add it on at the end. Might it not be quicker to add on as we go? What happens if we parse the String ourselves? Something like this...
private long sumLineByLineManualParse() throws NumberFormatException,
        IOException {
    BufferedReader br = new BufferedReader(new FileReader(file));
    String line;
    long total = 0;
    while ((line = br.readLine()) != null) {
        char chs[] = line.toCharArray();
        int mul = 1;
        for (int i = chs.length - 1; i >= 0; i--) {
            char c = chs[i];
            switch (c) {
            case '0':
                break;
            case '1':
                total += mul;
                break;
            case '2':
                total += (mul << 1);
                break;
            case '4':
                total += (mul << 2);
                break;
            case '8':
                total += (mul << 3);
                break;
            default:
                total += (mul * ((byte) c - (byte) ('0')));
            }
            mul *= 10;
        }
    }
    br.close();
    return total;
}
This, I thought, might save a little time, especially with some bitshift optimisations for doing the multiplication. But the overheads of converting to a character array must swamp any gains: this now takes 148.2 seconds.
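For comparison, a stripped-down manual parse — no switch, no bitshifts, just one multiply per digit accumulated left to right — is worth having on the bench. This is an illustrative sketch (the class and method names are mine, and it takes a Reader so it can be tested on in-memory data), not code I timed:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

class ManualParseNoSwitch {

    // Parse each line left to right: value = value * 10 + digit.
    // One multiply per digit, no branching on the digit's value.
    static long sum(Reader in) throws IOException {
        BufferedReader br = new BufferedReader(in);
        long total = 0;
        String line;
        while ((line = br.readLine()) != null) {
            long value = 0;
            for (int i = 0; i < line.length(); i++)
                value = value * 10 + (line.charAt(i) - '0');
            total += value;
        }
        br.close();
        return total;
    }
}
```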
Method 4: processing in binary
One last thing we can try is to process the file as binary data.
Parsing an integer from the front is awkward if you don't know the length of it. Parsing it backwards is much easier: the first digit you encounter is units, the next one is tens, and so on. So the easiest way to approach the whole thing is to read the file backwards.
If we allocate a byte[] buffer of (say) 8MB, we can fill it up with the last 8MB of the file, process it, then read the preceding 8MB, and so on. We need to be a little careful that we don't screw up a number that we're in the middle of parsing when we move to the next block, but that's the only problem.
When we encounter a digit, we add it (suitably multiplied according to its position in the numeral) to the total, and then multiply the coefficient by 10 so we're ready for the next digit. If we encounter anything that isn't a digit (a CR or LF), we just reset the coefficient.
private long sumBinary() throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    int lastRead = (int) raf.length();
    byte buf[] = new byte[8 * 1024 * 1024];
    int mul = 1;
    long total = 0;
    while (lastRead > 0) {
        int len = Math.min(buf.length, lastRead);
        raf.seek(lastRead - len);
        raf.readFully(buf, 0, len);
        lastRead -= len;
        for (int i = len - 1; i >= 0; i--) {
            // 48 is '0' and 57 is '9'
            if ((buf[i] >= 48) && (buf[i] <= 57)) {
                total += mul * (buf[i] - 48);
                mul *= 10;
            } else
                mul = 1;
        }
    }
    raf.close();
    return total;
}
This runs in 30.8 seconds! That's a speed increase by a factor of 3 over the previous best.
Follow-up questions
- Why is this so much faster? I was expecting it to win, but not quite so impressively. Is it mainly the overheads of converting to a String? And all the worrying behind the scenes about character sets and the like?
- Can we do any better than this by using a MappedByteBuffer to help? I have a feeling that the overheads of invoking methods to read from the buffer would slow things down, especially when reading backwards from the buffer.
- Would it be better to read the file forwards rather than backwards, but still scan the buffer backwards? The idea would be that you read the first chunk of the file, and then scan backwards, discarding the half-number at the end. Then when you read the next chunk, you set the offset so that you read from the beginning of the number you discarded.
- Is there anything I haven't thought of that could make a significant difference?
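The read-forwards/scan-backwards idea from the third question above can be sketched as follows. This is my illustrative reconstruction, not benchmarked code (the class name is mine, and the buffer size is a parameter so it can be exercised on small inputs):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class BackwardScanForwardRead {

    // Read the file front to back, but scan each buffer backwards. A
    // trailing partial number is discarded and re-read at the start of the
    // next chunk. Assumes a number plus its newline always fits in one
    // buffer (true here: at most 10 digits plus '\n').
    static long sum(File file, int bufSize) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        long fileLength = raf.length();
        byte[] buf = new byte[bufSize];
        long total = 0;
        long pos = 0;
        while (pos < fileLength) {
            raf.seek(pos);
            int len = (int) Math.min(buf.length, fileLength - pos);
            raf.readFully(buf, 0, len);
            int scanEnd = len;
            if (pos + len < fileLength) {
                // drop a possibly-incomplete trailing number
                while (scanEnd > 0 && buf[scanEnd - 1] >= '0' && buf[scanEnd - 1] <= '9')
                    scanEnd--;
            }
            int mul = 1;
            for (int i = scanEnd - 1; i >= 0; i--) {
                if (buf[i] >= '0' && buf[i] <= '9') {
                    total += mul * (buf[i] - '0');
                    mul *= 10;
                } else
                    mul = 1;
            }
            pos += scanEnd; // next read starts at the discarded number
        }
        raf.close();
        return total;
    }
}
```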
Update: more surprising results
First, an observation. It should have occurred to me before, but I think the reason for the inefficiency of the String-based reading is not so much the time taken to create all the String objects but the fact that they are so short-lived: we've got 100,000,000 of them for the garbage collector to deal with. That is bound to upset it.
Now some experiments based on answers/comments people have posted.
Am I cheating with the size of the buffer?
One suggestion was that since a BufferedReader uses a default buffer of 16KB, and I've used a buffer of 8MB, I'm not comparing like with like. It's bound to be faster if you use a bigger buffer.
Here's the shock. The sumBinary() method (Method 4) ran in 30.8 seconds yesterday with an 8MB buffer. Today, code unchanged, the wind direction has changed and we're at 30.4 seconds. If I drop the buffer size down to 16KB to see how much slower it gets, it gets faster! It now runs in 23.7 seconds. Crazy. Who saw that one coming?!
A bit of experimentation suggests that 16KB is about optimal. Perhaps the Java guys did the same experiments, and that's why they went with 16KB!
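For anyone who wants to reproduce the buffer-size effect, here's a small self-contained harness along the same lines (names are mine; it generates its own toy file, so its timings are only illustrative — the real effect needs the full 1GB file):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;

class BufferSweep {

    // Forward-scanning binary sum with a configurable buffer size.
    static long sum(File file, int bufSize) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        long fileLength = raf.length();
        byte[] buf = new byte[bufSize];
        int acc = 0;
        long total = 0;
        long read = 0;
        while (read < fileLength) {
            int len = (int) Math.min(buf.length, fileLength - read);
            raf.readFully(buf, 0, len);
            read += len;
            for (int i = 0; i < len; i++) {
                if (buf[i] >= '0' && buf[i] <= '9')
                    acc = acc * 10 + buf[i] - '0';
                else {
                    total += acc;
                    acc = 0;
                }
            }
        }
        raf.close();
        return total + acc; // flush a trailing number with no final newline
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("sweep", ".txt");
        f.deleteOnExit();
        FileWriter w = new FileWriter(f);
        for (int i = 1; i <= 1000; i++)
            w.write(i + "\n");
        w.close();
        for (int size : new int[] { 4 * 1024, 16 * 1024, 1024 * 1024 }) {
            long t0 = System.nanoTime();
            long s = sum(f, size);
            System.out.println(size + " bytes: sum=" + s + " in "
                    + (System.nanoTime() - t0) / 1000 + "us");
        }
    }
}
```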
Is the problem I/O-bound?
I wondered about this too. How much time is spent on disk access, and how much on number crunching? If it's almost all disk access, as suggested by a well-supported comment on one of the proposed answers, then we won't be able to make much improvement whatever we do.
This is easy to test by running the code with all the parsing and number crunching commented out, but with the reading still intact:
private long sumBinary() throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    int lastRead = (int) raf.length();
    byte buf[] = new byte[16 * 1024];
    int mul = 1;
    long total = 0;
    while (lastRead > 0) {
        int len = Math.min(buf.length, lastRead);
        raf.seek(lastRead - len);
        raf.readFully(buf, 0, len);
        lastRead -= len;
        /*for (int i = len - 1; i >= 0; i--) {
            if ((buf[i] >= 48) && (buf[i] <= 57)) {
                total += mul * (buf[i] - 48);
                mul *= 10;
            } else
                mul = 1;
        }*/
    }
    raf.close();
    return total;
}
This now runs in 3.7 seconds! This doesn't look I/O-bound to me.
Of course, some of the I/O speed will come from disk cache hits. But that isn't really the point here: we're still taking 20 seconds of CPU time (also confirmed using Linux's time command), which is plenty big enough to try to reduce it.
Scanning forwards instead of backwards
I'd maintained in my original post that there was good reason to scan the file backwards rather than forwards. I didn't explain that very well. The idea was that if you scan a number forwards, you have to accumulate the total value of the scanned number, and then add it on. If you scan backwards, you can add it to the cumulative total as you go. My subconscious was making some sort of sense to itself (on which more later), but I'd missed one key point, which was pointed out in one of the answers: to scan backwards, I was doing two multiplications per iteration, but with scanning forwards you need only one. So I coded up a forward-scanning version:
private long sumBinaryForward() throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    int fileLength = (int) raf.length();
    byte buf[] = new byte[16 * 1024];
    int acc = 0;
    long total = 0;
    int read = 0;
    while (read < fileLength) {
        int len = Math.min(buf.length, fileLength - read);
        raf.readFully(buf, 0, len);
        read += len;
        for (int i = 0; i < len; i++) {
            if ((buf[i] >= 48) && (buf[i] <= 57))
                acc = acc * 10 + buf[i] - 48;
            else {
                total += acc;
                acc = 0;
            }
        }
    }
    raf.close();
    return total;
}
This runs in 20.0 seconds, beating the backward-scanning version by a distance. Nice.
Multiplication cache
What I realised during the night, though, was that although I was performing two multiplications per iteration, there was the possibility of using a cache to store these multiplications, so that I could avoid having to perform them during backwards iteration. I was pleased to see when I woke up that someone had had the same idea!
The point is that there are at most 10 digits in the numbers we're scanning, and only 10 possible digits, so only 100 possibilities for the value a digit contributes to the cumulative total. We can precompute these, and then use them in the backward-scanning code. That ought to beat the forward-scanning version, because we've now got rid of the multiplications entirely. (Note that we can't do this with forward scanning, because the multiplication is of the accumulator, which could take any value up to 10^9. It's only in the backward case that both operands are limited to a few possibilities.)
private long sumBinaryCached() throws IOException {
    int mulCache[][] = new int[10][10];
    int coeff = 1;
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 10; j++)
            mulCache[i][j] = coeff * j;
        coeff *= 10;
    }

    RandomAccessFile raf = new RandomAccessFile(file, "r");
    int lastRead = (int) raf.length();
    byte buf[] = new byte[16 * 1024];
    int mul = 0;
    long total = 0;
    while (lastRead > 0) {
        int len = Math.min(buf.length, lastRead);
        raf.seek(lastRead - len);
        raf.readFully(buf, 0, len);
        lastRead -= len;
        for (int i = len - 1; i >= 0; i--) {
            if ((buf[i] >= 48) && (buf[i] <= 57))
                total += mulCache[mul++][buf[i] - 48];
            else
                mul = 0;
        }
    }
    raf.close();
    return total;
}
This runs in 26.1 seconds. Disappointing, to say the least. Reading backwards is less efficient in terms of I/O, but we've seen that I/O is not the major headache here. I had expected this to make a big positive difference. Perhaps the array lookup is just as expensive as the multiplications we've replaced. (I did try making the array 16x16, and using bitshifts to index, but it didn't help.)
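For the record, the 16x16 bitshift-indexed variant would have looked something like this (my reconstruction of the idea, not the exact code tried): flatten the table into one array and index it with (position << 4) | digit, so the two-dimensional lookup becomes a shift and an OR.

```java
class ShiftCacheSum {

    // 16x16 table flattened to one array; entries for positions 10-15 are
    // unused padding. long entries so the high-position products can't
    // overflow int while the table is being built.
    static final long[] CACHE = new long[16 * 16];
    static {
        long coeff = 1;
        for (int pos = 0; pos < 10; pos++) {
            for (int digit = 0; digit < 10; digit++)
                CACHE[(pos << 4) | digit] = coeff * digit;
            coeff *= 10;
        }
    }

    // Backward scan over a byte buffer, as in sumBinaryCached(), but with
    // the flat, shift-indexed cache.
    static long sum(byte[] buf) {
        long total = 0;
        int pos = 0;
        for (int i = buf.length - 1; i >= 0; i--) {
            if (buf[i] >= '0' && buf[i] <= '9')
                total += CACHE[(pos++ << 4) | (buf[i] - '0')];
            else
                pos = 0;
        }
        return total;
    }
}
```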
Looks like forward scanning is where it's at.
Using a MappedByteBuffer
Next thing to add in is a MappedByteBuffer, to see if that's more efficient than using a raw RandomAccessFile. It doesn't need much change to the code.
private long sumBinaryForwardMap() throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    byte buf[] = new byte[16 * 1024];
    final FileChannel ch = raf.getChannel();
    int fileLength = (int) ch.size();
    final MappedByteBuffer mb = ch.map(FileChannel.MapMode.READ_ONLY, 0,
            fileLength);
    int acc = 0;
    long total = 0;
    while (mb.hasRemaining()) {
        int len = Math.min(mb.remaining(), buf.length);
        mb.get(buf, 0, len);
        for (int i = 0; i < len; i++)
            if ((buf[i] >= 48) && (buf[i] <= 57))
                acc = acc * 10 + buf[i] - 48;
            else {
                total += acc;
                acc = 0;
            }
    }
    ch.close();
    raf.close();
    return total;
}
This does seem to improve things a little: we're now at 19.0 seconds. We've taken another second off our personal best!
What about multi-threading?
One of the proposed answers involves using multiple cores. I'm a little ashamed that that hadn't occurred to me!
The answer came in for some stick, because of the assumption that it's an I/O-bound problem. This seems a little harsh, in light of the results about I/O! Certainly worth a try, in any case.
We'll do this using fork/join. Here's a class to represent the result of a computation on part of the file, bearing in mind that there might be a partial result to the left (if we started half way through a number), and a partial result to the right (if the buffer finished half way through a number). The class also has a method for allowing us to glue two such results together, into a combined result for two adjacent sub-tasks.
private class SumTaskResult {
    long subtotal;
    int leftPartial;
    int leftMulCount;
    int rightPartial;

    public void append(SumTaskResult rightward) {
        subtotal += rightward.subtotal + rightPartial
                * rightward.leftMulCount + rightward.leftPartial;
        rightPartial = rightward.rightPartial;
    }
}
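The gluing arithmetic is fiddly, so here's a minimal standalone re-implementation of the same combine rule that can be checked on a small buffer (names are mine; it mirrors the fields of SumTaskResult but is a test rig, not the benchmarked class):

```java
class CombineCheck {

    // Mirrors SumTaskResult: a chunk's subtotal plus the unfinished digit
    // runs hanging off each edge.
    static final class Part {
        long subtotal;
        long leftPartial;      // value of the leading digit run
        long leftMulCount = 1; // 10^(length of that run)
        long rightPartial;     // value of the trailing digit run
    }

    // Scan buf[start, end): peel off the leading digit run, then sum
    // complete numbers, leaving any trailing run in rightPartial.
    static Part scan(byte[] buf, int start, int end) {
        Part p = new Part();
        int pos = start;
        while (pos < end && buf[pos] >= '0' && buf[pos] <= '9') {
            p.leftPartial = p.leftPartial * 10 + buf[pos] - '0';
            p.leftMulCount *= 10;
            pos++;
        }
        long acc = 0;
        for (int i = pos; i < end; i++) {
            if (buf[i] >= '0' && buf[i] <= '9')
                acc = acc * 10 + buf[i] - '0';
            else {
                p.subtotal += acc;
                acc = 0;
            }
        }
        p.rightPartial = acc;
        return p;
    }

    // The glue rule: a number split across the boundary is
    // left.rightPartial shifted up by the right chunk's leading digit
    // count, plus right.leftPartial.
    static Part append(Part l, Part r) {
        l.subtotal += r.subtotal + l.rightPartial * r.leftMulCount + r.leftPartial;
        l.rightPartial = r.rightPartial;
        return l;
    }
}
```

Splitting the same buffer at different points should always give the same total.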
Now the key bit: the RecursiveTask that computes the result. For small problems (less than 64 characters), it calls computeDirectly() to calculate the result in a single thread; for larger problems, it splits into two, solves the two sub-problems in separate threads, and then combines the results.
private class SumForkTask extends RecursiveTask<SumTaskResult> {

    private byte buf[];

    // startPos inclusive, endPos exclusive
    private int startPos;
    private int endPos;

    public SumForkTask(byte buf[], int startPos, int endPos) {
        this.buf = buf;
        this.startPos = startPos;
        this.endPos = endPos;
    }

    private SumTaskResult computeDirectly() {
        SumTaskResult result = new SumTaskResult();
        int pos = startPos;
        result.leftMulCount = 1;

        // pos < endPos guard added so a chunk that is entirely digits
        // can't run past its end (can't happen with a 64+ threshold and
        // numbers of at most 10 digits, but better safe)
        while (pos < endPos && (buf[pos] >= 48) && (buf[pos] <= 57)) {
            result.leftPartial = result.leftPartial * 10 + buf[pos] - 48;
            result.leftMulCount *= 10;
            pos++;
        }

        int acc = 0;
        for (int i = pos; i < endPos; i++)
            if ((buf[i] >= 48) && (buf[i] <= 57))
                acc = acc * 10 + buf[i] - 48;
            else {
                result.subtotal += acc;
                acc = 0;
            }
        result.rightPartial = acc;
        return result;
    }

    @Override
    protected SumTaskResult compute() {
        if (endPos - startPos < 64)
            return computeDirectly();
        int mid = (endPos + startPos) / 2;
        SumForkTask left = new SumForkTask(buf, startPos, mid);
        left.fork();
        SumForkTask right = new SumForkTask(buf, mid, endPos);
        SumTaskResult rRes = right.compute();
        SumTaskResult lRes = left.join();
        lRes.append(rRes);
        return lRes;
    }
}
Note that this is operating on a byte[], rather than the whole MappedByteBuffer. The reason for that is that we want to keep the disk access sequential. We'll take quite large chunks, fork/join, and then move to the next chunk.
Here's the method that does that. Note that we've pushed the buffer size up to 1MB (sub-optimal earlier, but more sensible here, it seems).
private long sumBinaryForwardMapForked() throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    ForkJoinPool pool = new ForkJoinPool();
    byte buf[] = new byte[1 * 1024 * 1024];
    final FileChannel ch = raf.getChannel();
    int fileLength = (int) ch.size();
    final MappedByteBuffer mb = ch.map(FileChannel.MapMode.READ_ONLY, 0,
            fileLength);
    SumTaskResult result = new SumTaskResult();
    while (mb.hasRemaining()) {
        int len = Math.min(mb.remaining(), buf.length);
        mb.get(buf, 0, len);
        SumForkTask task = new SumForkTask(buf, 0, len);
        result.append(pool.invoke(task));
    }
    ch.close();
    raf.close();
    pool.shutdown();
    return result.subtotal;
}
Now here's the soul-destroying disappointment: this nicely multi-threaded code now takes 32.2 seconds. Why so slow? I spent quite a while debugging this, assuming I'd done something terribly wrong.
Turns out there was just one small tweak needed. I'd thought the threshold of 64 between small problem and big problem was a reasonable one; turns out that was totally ridiculous.
Think about it like this. The sub-problems are exactly the same size, so they should complete in pretty much the same time. So there's really no point splitting into more pieces than there are processors available. On the machine I'm using, with only two cores, going down to a threshold of 64 is ridiculous: it just adds more overhead.
Now you don't want to limit things so that it only uses two cores even when there are more available. Perhaps the right thing to do would be to find out the number of processors at runtime, and split into that many pieces.
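A sketch of what that might look like: derive the direct-computation threshold from the core count at runtime, so each buffer splits into roughly one piece per core (illustrative only; the names are mine, and the right divisor is worth experimenting with):

```java
class SplitSizing {

    // Choose a threshold so the halving recursion bottoms out at roughly
    // one piece per core. A smaller threshold gives more pieces for load
    // balancing, at the cost of extra task overhead.
    static int thresholdFor(int bufLen) {
        int cores = Runtime.getRuntime().availableProcessors();
        // Round the piece size up so `cores` pieces cover the buffer,
        // and never go below the sequential-cutoff of 64 bytes.
        return Math.max(64, (bufLen + cores - 1) / cores);
    }

    public static void main(String[] args) {
        System.out.println("cores = " + Runtime.getRuntime().availableProcessors()
                + ", threshold for 1MB buffer = " + thresholdFor(1024 * 1024));
    }
}
```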
In any case, if I change the threshold to 512KB (half the buffer size), it now completes in 13.3 seconds. Going down to 128KB or 64KB would allow more cores to be used (up to 8 or 16 respectively), and doesn't significantly affect the runtime.
So multi-threading does make a big difference.
It's been quite a long journey, but we started out with something that took 92.9 seconds and we're now down to 13.3 seconds... that's seven times the speed of the original code. And that's not by improving the asymptotic (big-Oh) time complexity, which was linear (optimal) right from the start... this has all been about improving the constant factor.
A good day's work.
I suppose I should probably try using the GPU next...
Postscript: generating the file of random numbers
I generated the random numbers with the following code, which I ran and redirected to a file. Obviously I can't guarantee that you'll end up with exactly the same random numbers that I had :)
public static void genRandoms() {
    Random r = new Random();
    for (int i = 0; i < 100000000; i++)
        System.out.println(r.nextInt(1000000000));
}
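Incidentally, println on System.out is synchronized and flushes often, so generation should go considerably faster through a BufferedWriter. A parameterized sketch (names are mine, and it takes a seed, unlike the original, so runs are repeatable):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.Random;

class GenRandoms {

    // Write n random integers in [0, bound), one per line, through a
    // buffered writer to avoid per-line flushing.
    static void generate(Writer out, int n, int bound, long seed) throws IOException {
        BufferedWriter bw = new BufferedWriter(out, 1 << 16);
        Random r = new Random(seed); // fixed seed for reproducibility
        for (int i = 0; i < n; i++) {
            bw.write(Integer.toString(r.nextInt(bound)));
            bw.write('\n');
        }
        bw.flush();
    }
}
```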
Comments

- "BufferedReader uses 16k. Even if reading a file backward would be much more efficient (it isn't!), you would gain much more by just increasing the buffer in the reader to the same level. You're comparing apples & oranges." – Thomas Jungblut
- "Rewriting the case statement to use bit shifting in certain cases probably introduces more branching delays than perf improvements. Remove the case statement and just calculate total += (mul * ((byte) c - (byte) ('0'))); on each pass of the for-loop. What is the run time now?" – selbief
- "The claim is that fstream being slow doesn't matter because the CPU is so much faster than the disk. Benchmarks always show that to be false. A lot of care is required to keep up with a modern disk streaming transfer of 120MB/s (spinning) or 500MB/s (solid state), and of course cache is faster yet." – Ben Voigt