9
votes

I ran a Hadoop MapReduce job on a 1.1 GB file multiple times with different numbers of mappers and reducers (e.g. 1 mapper and 1 reducer, 1 mapper and 2 reducers, 1 mapper and 4 reducers, ...).

Hadoop is installed on a quad-core machine with hyper-threading.

The following are the top 5 results, sorted by shortest execution time:

+----------+--------------+---------------+
|   time   | # of mappers | # of reducers |
+----------+--------------+---------------+
| 7m 50s   |      8       |       2       |
| 8m 13s   |      8       |       4       |
| 8m 16s   |      8       |       8       |
| 8m 28s   |      4       |       8       |
| 8m 37s   |      4       |       4       |
+----------+--------------+---------------+

Edit

The results for 1-8 mappers and 1-8 reducers (columns = # of mappers, rows = # of reducers):

+---------+---------+---------+---------+---------+
| red\map |    1    |    2    |    4    |    8    |
+---------+---------+---------+---------+---------+
|    1    |  16:23  |  13:17  |  11:27  |  10:19  |
+---------+---------+---------+---------+---------+
|    2    |  13:56  |  10:24  |  08:41  |  07:52  |
+---------+---------+---------+---------+---------+
|    4    |  14:12  |  10:21  |  08:37  |  08:13  |
+---------+---------+---------+---------+---------+
|    8    |  14:09  |  09:46  |  08:28  |  08:16  |
+---------+---------+---------+---------+---------+

(1) It looks like the program runs slightly faster when I have 8 mappers, but why does it slow down as I increase the number of reducers? (e.g. 8 mappers/2 reducers is faster than 8 mappers/8 reducers)

(2) When I use only 4 mappers, it's a bit slower simply because I'm not utilizing the other 4 logical cores, right?
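For context, the reducer count is set directly on the job, while the mapper count follows from how the input is split. Below is a minimal sketch of a driver showing both, assuming the Hadoop 2.x mapreduce API (on older versions the equivalent properties are mapred.max.split.size and mapred.reduce.tasks); it is not the actual driver used for these measurements:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapperReducerExperiment {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // The reducer count is set explicitly further down. The mapper count
            // is indirect: it follows from the number of input splits, so capping
            // the split size at roughly 1.1 GB / 8 yields about 8 map tasks.
            conf.setLong("mapreduce.input.fileinputformat.split.maxsize",
                         150L * 1024 * 1024);

            Job job = Job.getInstance(conf, "mapper-reducer-experiment");
            job.setJarByClass(MapperReducerExperiment.class);
            // job.setMapperClass(...) / job.setReducerClass(...) would point at
            // the real map and reduce classes, which are not shown here.
            job.setNumReduceTasks(2);  // e.g. the 8-mapper / 2-reducer run

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Rerunning with different split sizes and setNumReduceTasks values gives the kind of sweep shown in the tables above.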

As per your description, it's installed on 1 machine, so it is the master and there are no extra nodes? Are you using a cluster? – Aditya Peshave
@ADi Yes, it's installed on 1 machine, which is a quad-core with hyper-threading. – kabichan
How many times did you try each step? What do you see in the counters, and are the reduce keys distributed well? It's pretty hard to say anything without counters, logs and configuration. – rav
How many physical disks are available to this pseudo-distributed instance? – Chris White
I would like to see 1, 2, 4, 8 mappers, each with 1, 2, 4, 8 reducers as well. – Niels Basjes

2 Answers

17
votes

The optimal number of mappers and reducers depends on a lot of things.

The main thing to aim for is a balance between the CPU power used, the amount of data that is transported (into the mappers, between the mappers and reducers, and out of the reducers), and the disk 'head movements'.

Each task in a MapReduce job works best if it can read/write its data with minimal disk head movements, usually described as "sequential reads/writes". But if a task is CPU bound, the extra disk head movements do not impact the job.

It seems to me that in this specific case you have:

  • a mapper that uses quite a few CPU cycles (i.e. more mappers make the job go faster because the CPU is the bottleneck and the disks can keep up providing the input data).
  • a reducer that uses almost no CPU cycles and is mostly IO bound. As a result, with a single reducer you are still CPU bound, yet with 4 or more reducers you seem to be IO bound: 4 reducers make the disk heads move 'too much' (the counters sketch right after this list is one way to check this).
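One way to check this split between CPU and IO, also hinted at by the comments asking about counters, is to read the job's built-in counters after a run. A rough sketch, assuming the Hadoop 2.x mapreduce API and its TaskCounter enum (older releases expose the same numbers under different counter names):

    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.TaskCounter;

    public class BottleneckHints {
        // Call this after job.waitForCompletion(true); a rough heuristic, not a profiler.
        static void print(Job job) throws Exception {
            Counters c = job.getCounters();
            long cpuMs       = c.findCounter(TaskCounter.CPU_MILLISECONDS).getValue();
            long spilled     = c.findCounter(TaskCounter.SPILLED_RECORDS).getValue();
            long mapOutBytes = c.findCounter(TaskCounter.MAP_OUTPUT_BYTES).getValue();

            System.out.println("Total task CPU time (ms): " + cpuMs);
            System.out.println("Spilled records:          " + spilled);
            System.out.println("Map output bytes:         " + mapOutBytes);
            // CPU time close to (concurrent tasks x wall-clock time) points at a
            // CPU-bound job; lots of spilled records and shuffle bytes with little
            // CPU time points at an IO-bound one.
        }
    }

Comparing these numbers between, say, the 8/2 and the 8/8 runs should show where the extra time goes.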

Possible ways to handle this kind of situation:

First, do exactly what you did: do some test runs and see which setting performs best for this specific job on your specific cluster.

Then you have three options:

  • Accept the situation you have
  • Shift load from CPU to disk or the other way around.
  • Get a bigger cluster: More CPUs and/or more disks.

Suggestions for shifting the load:

  • If CPU bound and all CPUs are fully loaded then reduce the CPU load:

    • Check for needless CPU cycles in your code.
    • Switch to a 'lower CPU impact' compression codec, i.e. go from GZip to Snappy or to 'no compression'.
    • Tune the number of mappers/reducers in your job.
  • If IO bound and you have some CPU capacity left:

    • Enable compression: this makes the CPUs work a bit harder and reduces the work the disks have to do (see the sketch after this list).
    • Experiment with various compression codecs (I recommend sticking with either Snappy or Gzip ... I very often go with Gzip).
    • Tune the number of mappers/reducers in your job.
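As a concrete example of the compression suggestions above, here is a minimal sketch assuming the Hadoop 2.x property names (in 1.x they are mapred.compress.map.output and mapred.map.output.compression.codec) and an installed native Snappy library:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedShuffleJob {
        static Job create() throws Exception {
            Configuration conf = new Configuration();

            // Compress the intermediate map output (the data shuffled from the
            // mappers to the reducers): a bit more CPU in exchange for less disk IO.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                          SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "compressed-shuffle");

            // Optionally compress the final job output as well; swap in GzipCodec
            // here to compare codecs as suggested above.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
            return job;
        }
    }
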
0
votes

Quoting from "Hadoop: The Definitive Guide", 3rd edition, page 306:

Because MapReduce jobs are normally I/O-bound, it makes sense to have more tasks than processors to get better utilization.

The amount of oversubscription depends on the CPU utilization of jobs you run, but a good rule of thumb is to have a factor of between one and two more tasks (counting both map and reduce tasks) than processors.

A processor in the quote above is equivalent to one logical core. In your case that means 8 logical cores (4 physical cores with hyper-threading), so the rule of thumb suggests roughly 8 to 16 map and reduce tasks in total.

But this is just theory, and most likely each use case is different from the next; as Niels' detailed explanation shows, some tests need to be performed.