It's the other way round: the number of mappers is decided based on the number of splits, and in reality it is the job of the `InputFormat` you are using to create those splits. You have no idea about the number of mappers until the number of splits has been decided. And splits are not always created based on the HDFS block size; it depends entirely on the logic inside the `getSplits()` method of your `InputFormat`.
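To make that concrete, here is the rough shape of the new-API contract, simplified from `org.apache.hadoop.mapreduce.InputFormat` (the real class carries extra Javadoc and annotations):

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Simplified sketch of org.apache.hadoop.mapreduce.InputFormat. The
// framework launches one map task per element of the list returned
// by getSplits(), so the split logic alone fixes the mapper count.
public abstract class InputFormat<K, V> {

    // Decides how the input is carved up; nothing about HDFS blocks
    // is required here -- that is purely a FileInputFormat convention.
    public abstract List<InputSplit> getSplits(JobContext context)
            throws IOException, InterruptedException;

    // Produces the reader that turns one split into key/value records.
    public abstract RecordReader<K, V> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException;
}
```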
To understand this better, assume you are processing data stored in MySQL using MR. Since there is no concept of blocks in this case, the theory that splits are always created based on the HDFS block fails, right? What about split creation then? One possibility is to create splits based on ranges of rows in your MySQL table (and this is what `DBInputFormat` does; it is an input format for reading data from a relational database). Suppose you have 100 rows. Then you might have 5 splits of 20 rows each.
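A toy sketch of that row-range idea (this is not `DBInputFormat`'s actual code; the names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the row-range idea behind DBInputFormat: carve
// the table into fixed-size row ranges and feed one range per mapper.
public class RowRangeSplits {

    // A "split" here is just a logical [startRow, endRow) range; it
    // carries no data, only a description of which rows to read.
    record RowRange(long startRow, long endRow) {}

    static List<RowRange> getSplits(long totalRows, int numSplits) {
        List<RowRange> splits = new ArrayList<>();
        long chunk = (totalRows + numSplits - 1) / numSplits;
        for (long start = 0; start < totalRows; start += chunk) {
            splits.add(new RowRange(start, Math.min(start + chunk, totalRows)));
        }
        return splits;
    }

    public static void main(String[] args) {
        // 100 rows, 5 splits => five ranges of 20 rows, i.e. 5 mappers.
        getSplits(100, 5).forEach(System.out::println);
    }
}
```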
It is only for the `InputFormat`s based on `FileInputFormat` (an `InputFormat` for handling data stored in files) that the splits are created based on the total size, in bytes, of the input files. However, the filesystem block size of the input files is treated as an upper bound for input splits. If you have a file smaller than the HDFS block size, you'll get only 1 mapper for that file. If you want different behavior, you can use `mapred.min.split.size`. But again, it depends solely on the `getSplits()` implementation of your `InputFormat`.
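A minimal sketch of that sizing rule (not the exact Hadoop source, but the same max/min dance the `FileInputFormat` family performs; `minSize` corresponds to `mapred.min.split.size`):

```java
// Sketch of FileInputFormat-style split sizing: the block size is the
// default upper bound, and the configured minimum is the lower bound.
public class SplitSizeSketch {

    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // 64 MB HDFS block
        long fileSize  = 500L * 1024 * 1024;  // 500 MB input file

        // Defaults: minSize = 1 byte, maxSize = unbounded.
        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
        long numSplits = (fileSize + splitSize - 1) / splitSize;

        // => 8 splits of 64 MB each, hence 8 mappers for this file.
        System.out.println(numSplits + " splits of " + splitSize + " bytes");
    }
}
```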
There is a fundamental difference between an MR `split` and an HDFS `block`, and folks often get confused by this. A block is a physical piece of data, while a split is just a logical piece which is going to be fed to a mapper. A split does not contain the input data; it is just a reference to the data. Then what is a split? A split basically has 2 things: a length in bytes and a set of storage locations, which are just hostname strings.
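You can see exactly this in the shape of Hadoop's split abstraction, simplified here from `org.apache.hadoop.mapreduce.InputSplit`:

```java
import java.io.IOException;

// Simplified from org.apache.hadoop.mapreduce.InputSplit: a split is
// only metadata -- a length and the hosts where the data lives --
// never the data itself.
public abstract class InputSplit {

    // Size of the split in bytes, e.g. used to schedule big splits first.
    public abstract long getLength()
            throws IOException, InterruptedException;

    // Hostnames where the underlying data is local. These are
    // scheduling hints for data locality, not guarantees.
    public abstract String[] getLocations()
            throws IOException, InterruptedException;
}
```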
Coming back to your question: Hadoop allows many more than 200 mappers. Having said that, it doesn't make much sense to have 200 mappers for just 500 MB of data. Always remember that when you talk about Hadoop, you are dealing with very huge data. Sending just 2.5 MB of data to each mapper would be overkill. And yes, if there are no free CPU slots, then some mappers may run after the completion of the current mappers. But the MR framework is very intelligent and tries its best to avoid that kind of situation. If the machine holding the data to be processed doesn't have any free CPU slots, the data will be moved to a nearby node where free slots are available, and get processed there.
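If you do want fewer, larger splits for such a small input, raising the minimum split size is the usual knob. A hedged sketch of the job setup (`mapred.min.split.size` is the old-API property name used above; newer releases spell it `mapreduce.input.fileinputformat.split.minsize`):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class FewerMappers {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // With a 128 MB minimum, a 500 MB input on 64 MB blocks yields
        // ceil(500 / 128) = 4 splits, i.e. 4 mappers instead of 8.
        conf.setLong("mapred.min.split.size", 128L * 1024 * 1024);

        Job job = Job.getInstance(conf, "fewer-mappers");
        // ... set mapper class, input/output paths, etc. as usual ...
    }
}
```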
HTH