6
votes

Is it possible to process values with the same key on different reducers? From all my mappers I get data with the same key, and I want to process it with different reducers. My confusion is that the book says all values with the same key will go to the same reducer...

 mapper1(k1, v1), mapper2(k1, v2), mapper3(k1, v3), and so on...

I don't want all the data to go to the same reducer... it should be like

 reducer1(k1, v1), reducer2(k1, v2), ...

Let's say reducer1 produces sum1 and reducer2 produces sum2, and in the end I want

 sum = sum1 + sum2

How should I do that?

2
Is there a reason why you can't do the above using a combiner, and then sum up the outputs of the combiners in the reducer? - Suchet
Suppose I have very big data (say, a huge number of rows in a matrix, and in the end I want the sum of all its elements). I can easily compute the sum for one split in a combiner, but if I want the sum of the whole, I need to send the output of all combiners to a single reducer (I don't know of another way), which leads to a very slow process. - Divyendra
You are not benefiting from the distributed nature of Hadoop. Partition your data such that more mappers work on your input files simultaneously. Problems like these are trivial. - Suchet
The problem is not with the mappers... there will be multiple of them. It is that if I want the total sum, I am forced to use a single reducer, and in that case one reducer for all the mapper outputs would make the computation slow. Please correct me if I am wrong. - Divyendra
Even if you have a billion numbers to add and split them over 10 mappers with 100 million numbers each, your combiners will sum up the numbers from each mapper, and the reducer then has to add only 10 numbers to get the grand total. That should be pretty fast for the reducer. - Suchet
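
To make the combiner approach from the comments concrete, here is a minimal sketch of such a sum job, assuming whitespace-separated integer input; the class and job names are made up for illustration. The same class serves as combiner and reducer, so each mapper's output is pre-summed locally and the single reducer only adds one partial sum per map task:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MatrixSum {

        // Emits every matrix element under a single constant key.
        public static class SumMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final Text KEY = new Text("sum");

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                for (String field : line.toString().trim().split("\\s+")) {
                    if (field.isEmpty()) continue;  // skip blank lines
                    context.write(KEY, new LongWritable(Long.parseLong(field)));
                }
            }
        }

        // Used as both combiner and reducer: sums all values seen for a key.
        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "matrix sum");
            job.setJarByClass(MatrixSum.class);
            job.setMapperClass(SumMapper.class);
            job.setCombinerClass(SumReducer.class);  // pre-aggregates on the map side
            job.setReducerClass(SumReducer.class);   // the lone reducer adds a handful of partials
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }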

2 Answers

5
votes

Data with the same key will always go to the same reducer. But you can choose whatever key you want, so if you want them to go to different reducers, then just choose different keys.
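
For example, a mapper can "salt" a logical key so that its values spread over several distinct keys, and therefore over several reducers. A minimal sketch, assuming tab-separated key/value input lines and a made-up shard count:

    import java.io.IOException;
    import java.util.Random;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper: spreads one logical key (k1) over NUM_SHARDS
    // distinct keys (k1#0, k1#1, ...) so the values fan out across reducers.
    public class SaltingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final int NUM_SHARDS = 4;  // assumed shard count
        private final Random random = new Random();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumed input format: "key<TAB>value"
            String[] parts = line.toString().split("\t");
            String saltedKey = parts[0] + "#" + random.nextInt(NUM_SHARDS);
            context.write(new Text(saltedKey), new LongWritable(Long.parseLong(parts[1])));
        }
    }

A follow-up job can then strip the #N suffix and merge the per-shard partial results, which leads to the point below.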

If you want to do an additional combination based on the output from your reducers, then you must do another MapReduce job, with the output from the first job as the input to the next one. This can get ugly fast, so you may wish to look at Cascading, Pig, or Hive to simplify things.
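
A minimal sketch of such job chaining, assuming made-up path arguments and eliding the per-stage mapper/reducer classes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical driver: the first job writes partial sums, the second
    // job reads that output directory and combines them into one total.
    public class ChainedDriver {
        public static void main(String[] args) throws Exception {
            Path input = new Path(args[0]);
            Path intermediate = new Path(args[1]);  // output of job 1, input of job 2
            Path output = new Path(args[2]);

            Job first = Job.getInstance(new Configuration(), "partial sums");
            // ... set mapper/reducer classes for the first stage here ...
            FileInputFormat.addInputPath(first, input);
            FileOutputFormat.setOutputPath(first, intermediate);
            if (!first.waitForCompletion(true)) System.exit(1);

            Job second = Job.getInstance(new Configuration(), "combine partial sums");
            // ... set mapper/reducer classes for the second stage here ...
            FileInputFormat.addInputPath(second, intermediate);
            FileOutputFormat.setOutputPath(second, output);
            System.exit(second.waitForCompletion(true) ? 0 : 1);
        }
    }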

2
votes

You can write a custom Partitioner for your case, which overrides the default partitioning behavior of a Hadoop MapReduce job.

More details here: http://developer.yahoo.com/hadoop/tutorial/module5.html#partitioning
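
As an illustration (not taken from the tutorial above), a partitioner can route records by value instead of by key, so records sharing a key land on different reducers; the class name and the value-hashing scheme here are assumptions:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical partitioner: assigns a reduce partition from the value,
    // so records with the same key can still go to different reducers.
    public class ValuePartitioner extends Partitioner<Text, LongWritable> {
        @Override
        public int getPartition(Text key, LongWritable value, int numPartitions) {
            // The default HashPartitioner would hash the key instead:
            // (key.hashCode() & Integer.MAX_VALUE) % numPartitions
            return (int) ((value.get() & Long.MAX_VALUE) % numPartitions);
        }
    }

Register it with job.setPartitionerClass(ValuePartitioner.class) and set the reducer count with job.setNumReduceTasks(...). Each reducer then sees only a subset of the values for a given key, which matches the reducer1(k1,v1), reducer2(k1,v2) scenario in the question; a second job is still needed to combine the per-reducer partial sums.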