0
votes

I am running a MapReduce job in an application that runs on top of Hadoop. It works for smaller datasets, but increasing the data size causes it to fail with a message like the one below.

I tried various memory configurations in mapred.child.*.java.opts, but without success. The process runs to 6% or 7% and then fails. If the data size is reduced, it runs to a higher percentage before failing. I can see that this particular process is assigned to only one mapper.

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250)
Caused by: java.io.IOException: Task process exit with nonzero status of 137.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237)

1
You might want to explain the logic of your failing process, or post the code if it's not too long. – Peter Brittain

1 Answer

0
votes

Possible reason: the memory allocated to the task trackers (the sum of the mapred.*.child.java.opts settings in mapred-site.xml) exceeds the node's actual physical memory. Exit status 137 corresponds to SIGKILL (128 + 9), which typically means the operating system's OOM killer terminated the task's JVM.
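As a rough sketch (the property names are from the old Hadoop 1.x mapred-site.xml API; the slot counts and heap sizes below are illustrative, not recommendations), you would size things so that (map slots × map heap) + (reduce slots × reduce heap) stays comfortably below the node's physical RAM:

```xml
<!-- mapred-site.xml: illustrative values; tune slot counts and -Xmx to your node's RAM -->
<configuration>
  <!-- 2 map slots * 512 MB + 2 reduce slots * 512 MB = 2 GB of task heap per node -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <!-- per-task JVM heap caps (override the generic mapred.child.java.opts) -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>
```

If the node's kernel log (e.g. dmesg) shows "Killed process" entries around the failure time, that confirms the OOM killer rather than a JVM-level OutOfMemoryError.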