In the MapReduce framework, the reduce function is invoked once for each distinct key emitted by the mappers.
So you would think that specifying the number of reducers in Hadoop MapReduce wouldn't make any sense, because it depends on the data the job processes. However, Hadoop lets you specify the number of reducer tasks explicitly (-D mapred.reduce.tasks=<number of reducers> on the command line).
What does this parameter actually control? Does its value determine how many machine resources are allocated to the reducers, rather than the actual number of reducers used?
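
For reference, here is roughly how I set it in my driver. This is just a minimal word-count-style sketch: WordCountDriver, WordCountMapper, and WordCountReducer are placeholder names from my test job, and the input/output paths come from the command line. As far as I understand, job.setNumReduceTasks is the in-code equivalent of the -D flag above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);   // placeholder mapper class
            job.setReducerClass(WordCountReducer.class); // placeholder reducer class
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // This is the part my question is about: fixing the number of
            // reduce tasks to 4, regardless of how many distinct keys the
            // mappers end up emitting.
            job.setNumReduceTasks(4);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

When I run this, the job really does launch 4 reduce tasks, which is what prompted the question.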