I'm running a local Hadoop cluster and trying to submit two jobs simultaneously, but while the first job goes through, the second one stays in the UNASSIGNED state until the first one finishes. I have a hunch that it's a memory problem, but I can't quite figure it out. Here are the values I've set for the container, mapper, reducer, JVM, etc.:
yarn.nodemanager.resource.memory-mb=40960
yarn.scheduler.minimum-allocation-mb=4096
yarn.scheduler.maximum-allocation-mb=10240
mapreduce.map.java.opts=-Xmx5120m
mapreduce.reduce.java.opts=-Xmx5120m
mapreduce.map.memory.mb=8192
mapreduce.reduce.memory.mb=8192
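In case it helps, this is roughly how they're laid out in the config files (a sketch of my setup, assuming the usual split of the YARN properties into yarn-site.xml and the MapReduce ones into mapred-site.xml):

<!-- yarn-site.xml: NodeManager capacity and scheduler allocation bounds -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>40960</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>10240</value>
</property>

<!-- mapred-site.xml: per-task container sizes and JVM heaps -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx5120m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx5120m</value>
</property>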
The rest of the properties are left at their defaults. By my math, a single NodeManager with 40960 MB should fit five 8192 MB containers, so I don't see why the second job's ApplicationMaster can't get a container. Is there anything wrong with my values? Is there anything else I should change?