I develop MapReduce jobs using Hadoop. My driver program submits a MapReduce job (with a map and a reduce task) to the Hadoop JobTracker. I have two questions:

a) Can my map or reduce task submit another MapReduce job (to the same Hadoop cluster and the same JobTracker)? That is, my initial driver program submits a MapReduce job whose map or reduce task then spawns another MapReduce job and submits it to the same cluster and the same JobTracker. I think it's possible, but I'm not sure. Moreover, is it a good solution? If not, is there a better alternative?
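To make question a) concrete, here is a sketch of what I have in mind: submitting a second job from inside a reduce task's `cleanup()`. All class names and paths here are placeholders, and I am not sure this is a sound pattern (it assumes the standard `org.apache.hadoop.mapreduce` API):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SpawningReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Build and submit a second job from inside this reduce task --
        // the scenario question a) asks about.
        Configuration conf = new Configuration();
        Job child = new Job(conf, "child-job");
        child.setJarByClass(SpawningReducer.class);
        child.setMapperClass(ChildMapper.class);    // placeholder mapper class
        child.setReducerClass(ChildReducer.class);  // placeholder reducer class
        FileInputFormat.addInputPath(child, new Path("child-input"));    // placeholder path
        FileOutputFormat.setOutputPath(child, new Path("child-output")); // placeholder path
        try {
            // Fire-and-forget submission; waitForCompletion(true) would
            // block this reduce task until the child job finishes.
            child.submit();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}
```

The alternative I am considering is chaining the jobs sequentially from the driver instead, i.e. calling `waitForCompletion(true)` on the first job and only then submitting the second.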
b) Can we use two map tasks (with two different map functions) and one reduce task in a single MapReduce job?

Thanks a lot.
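For question b), from what I have read, `MultipleInputs` may allow a single job to apply a different mapper class to each input path, with one reducer consuming the output of both. A sketch of the driver I have in mind (paths and the `MapperA`/`MapperB`/`JoinReducer` class names are placeholders; both mappers would need to emit the same key/value types):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoMapperDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "two-mappers-one-reducer");
        job.setJarByClass(TwoMapperDriver.class);

        // Each input path gets its own mapper class with its own map function.
        MultipleInputs.addInputPath(job, new Path("inputA"),
                TextInputFormat.class, MapperA.class);
        MultipleInputs.addInputPath(job, new Path("inputB"),
                TextInputFormat.class, MapperB.class);

        // A single reducer sees the merged output of both mappers.
        job.setReducerClass(JoinReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path("output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Is this the right approach, or is there a better way to combine two different map functions with one reduce task?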