4 votes

Are there any ways to improve MapReduce performance by changing the number of map tasks or the split size of each mapper? For example, I have a 100 GB text file and 20 nodes, and I want to run a WordCount job on that file. What is the ideal number of mappers, or the ideal split size, so that the job finishes faster?

Would it be faster with more mappers? Would it be faster with a smaller split size?

EDIT

I am using Hadoop 2.7.1, so YARN is available.


1 Answer

1 vote

It is not necessarily faster when you use more mappers: each mapper has a start-up and setup cost. In the early days of Hadoop, when MapReduce was the de facto standard, the rule of thumb was that a mapper should run for about 10 minutes; today the documentation recommends at least one minute. You can vary the number of map tasks with setNumMapTasks(int), which you can call on the JobConf. The documentation of that method contains very good information about the mapper count:

How many maps?

The number of maps is usually driven by the total size of the inputs i.e. total number of blocks of the input files.

The right level of parallelism for maps seems to be around 10-100 maps per-node, although it has been set up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.

The default behavior of file-based InputFormats is to split the input into logical InputSplits based on the total size, in bytes, of input files. However, the FileSystem blocksize of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapreduce.input.fileinputformat.split.minsize.

Thus, if you expect 10TB of input data and have a blocksize of 128MB, you'll end up with 82,000 maps, unless setNumMapTasks(int) is used to set it even higher.
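To make this concrete, here is a minimal driver sketch based on the standard WordCount example, assuming Hadoop 2.7.x and the new org.apache.hadoop.mapreduce API. Note that setNumMapTasks(int) belongs to the old org.apache.hadoop.mapred.JobConf API and is only a hint to the framework; with the new API you steer the mapper count indirectly through the split size, as shown below. The 256 MB value is an arbitrary illustration, not a recommendation:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Standard tokenizing mapper: emits (word, 1) for every token in the line.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Standard summing reducer: adds up the counts per word.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Raise the lower bound on the split size so each mapper processes more data
            // (fewer, longer-running map tasks). 256 MB is only an example value; set it
            // before the Job is created so it ends up in the job's configuration.
            conf.setLong("mapreduce.input.fileinputformat.split.minsize", 256L * 1024 * 1024);

            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // Equivalent to the property above:
            // FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

For the numbers in your question: 100 GB with the default 128 MB block size is roughly 800 splits, or about 40 map tasks per node on 20 nodes, which is already inside the 10-100 maps-per-node range quoted above. So drastically changing the split size is unlikely to help much; the main thing to avoid is lots of very short-lived mappers.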

Your question is probably related to this SO question.

To be honest, also have a look at more modern frameworks such as Apache Spark and Apache Flink.