I have a Spark application that keeps failing with this error:
"Diagnostics: Container [pid=29328,containerID=container_e42_1512395822750_0026_02_000001] is running beyond physical memory limits. Current usage: 1.5 GB of 1.5 GB physical memory used; 2.3 GB of 3.1 GB virtual memory used. Killing container."
I have seen lots of different parameters that were suggested for increasing the physical memory. Can I please have some explanation of the following parameters?
- mapreduce.map.memory.mb — currently set to 0, so it is supposed to take the default of 1 GB. Why do we see 1.5 GB? Changing it also didn't affect the number.
- mapreduce.reduce.memory.mb — same situation: set to 0, supposed to default to 1 GB, yet we see 1.5 GB, and changing it didn't affect the number either.
- mapreduce.map.java.opts / mapreduce.reduce.java.opts — set to 80% of the previous number.
- yarn.scheduler.minimum-allocation-mb = 1 GB — when I change this I do see an effect on the max physical memory, but with the value 1 GB it is still 1.5 GB.
- yarn.app.mapreduce.am.resource.mb / spark.yarn.executor.memoryOverhead — I can't find these in the configuration at all.
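For context, this is roughly the shape of submit command I would expect to control the container sizes in yarn-cluster mode. The class name, jar, and all values below are illustrative placeholders, not our actual settings:

```shell
# Hypothetical spark-submit sketch; values are placeholders for illustration.
# In yarn-cluster mode the AM container holds the driver, so its size is
# spark.driver.memory plus spark.yarn.driver.memoryOverhead; each executor
# container is spark.executor.memory plus spark.yarn.executor.memoryOverhead.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.driver.memory=1g \
  --conf spark.yarn.driver.memoryOverhead=384 \
  --conf spark.executor.memory=1g \
  --conf spark.yarn.executor.memoryOverhead=384 \
  --class com.example.MyApp \
  my-app.jar
```

Is this the right set of knobs for a Spark-on-YARN application, and are the mapreduce.* parameters simply ignored here?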
We are running on YARN (yarn-cluster deploy mode) on Cloudera CDH 5.12.1.