
I'm running a local Hadoop cluster and trying to submit two jobs simultaneously, but my first job goes through while the second one stays in the UNASSIGNED state until the first finishes. I have a hunch that it is a memory problem, but I can't quite figure it out. Here are the values I set for the container, mapper, reducer, JVM, etc.

yarn.nodemanager.resource.memory-mb=40960
yarn.scheduler.minimum-allocation-mb=4096
yarn.scheduler.maximum-allocation-mb=10240
mapreduce.map.java.opts=-Xmx5120m
mapreduce.reduce.java.opts=-Xmx5120m
mapreduce.map.memory.mb=8192
mapreduce.reduce.memory.mb=8192

The rest of the properties have their default values. Is there anything wrong with my values? Is there anything else I should change?


2 Answers


I solved the problem: it was caused by the yarn.scheduler.capacity.maximum-am-resource-percent property. I set it to a higher value.
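For context, this property defaults to 0.1, meaning at most 10% of cluster memory may be held by ApplicationMasters at once. With yarn.nodemanager.resource.memory-mb=40960 that caps AMs at 4096 MB, and since the AM container gets rounded up to yarn.scheduler.minimum-allocation-mb=4096, only one AM can likely run at a time, leaving the second job UNASSIGNED. A minimal sketch of the change in capacity-scheduler.xml (the value 0.5 is only an illustrative example, not a recommendation):

<!-- capacity-scheduler.xml: raise the share of cluster memory that
     ApplicationMasters may occupy (default is 0.1, i.e. 10%). -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>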


What is your datanode/slave configuration? You have told each NodeManager it has 40 GB (yarn.nodemanager.resource.memory-mb) to allocate across containers; if the machine doesn't actually have that much memory free, the node may be unable to run more than one container at a time. You can adjust these settings according to the hardware you have. See yarn-default.xml in the docs for the defaults. Thank you.
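As a concrete illustration of sizing the settings to the hardware, suppose the machine can spare 16 GB for YARN containers; all numbers below are assumptions for illustration, not values from the original post. A common rule of thumb is to keep the JVM heap (-Xmx) at roughly 80% of the container size so off-heap overhead still fits.

<!-- yarn-site.xml: assumed 16 GB available to YARN on this node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>

<!-- mapred-site.xml: 4 GB map containers with a ~80% heap -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3276m</value>
</property>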