5
votes

Running a Spark job on 1 TB of data with the following configuration:

33 GB executor memory, 40 executors, 5 cores per executor

17 GB memoryOverhead
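
The question doesn't show how the job is submitted, but as a rough sketch the configuration above would correspond to something like the following SparkConf (the application name is a placeholder, and the overhead value is given in MiB, which is how spark.yarn.executor.memoryOverhead was interpreted in Spark 1.x):

    import org.apache.spark.SparkConf

    // Hedged reconstruction of the setup described in the question.
    val conf = new SparkConf()
      .setAppName("one-tb-job")                            // placeholder name
      .set("spark.executor.memory", "33g")                 // 33 GB executor heap
      .set("spark.executor.instances", "40")               // 40 executors
      .set("spark.executor.cores", "5")                    // 5 cores per executor
      .set("spark.yarn.executor.memoryOverhead", "17408")  // ~17 GB off-heap overhead (MiB)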

What are the possible reasons for this error?

If you could post the full error from the console, it would help. – WoodChopper
Have you considered boosting spark.yarn.executor.memoryOverhead? – lxg
Thanks for the reply, lxg. The default spark.yarn.executor.memoryOverhead is 0.1 of executor memory, and I have already set it to 0.5 of the executor's memory. How much more should I increase it, and what is happening in the background that leads to this warning? – Renu

2 Answers

3
votes

Where did you get that warning from? Which particular logs? You're lucky you even get a warning :). Indeed, 17 GB seems like enough, but then you do have 1 TB of data. I've had to use more like 30 GB for less data than that.

The reason for the error is that YARN uses extra memory for the container that doesn't live in the memory space of the executor. I've noticed that more tasks (partitions) mean much more memory used, and shuffles are generally heavier; other than that, I haven't seen any other correlation with what I do. Something, somehow, is eating memory unnecessarily.
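
To put rough numbers on that, using only the figures from the question: YARN sizes each container at roughly spark.executor.memory plus the overhead, so each executor asks for about 33 GB + 17 GB = 50 GB, and 40 executors request on the order of 2 TB of cluster memory before the driver is even counted. Anything the container uses beyond that request (off-heap buffers, shuffle spill, JVM internals) is what gets it killed.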

It seems the world is moving to Mesos; maybe it doesn't have this problem. Even better, just use Spark standalone.

More info: http://www.wdong.org/wordpress/blog/2015/01/08/spark-on-yarn-where-have-all-my-memory-gone/. This link seems to be dead (it's a deep dive into the way YARN gobbles memory). This mirror may work: http://m.blog.csdn.net/article/details?id=50387104. If not, try googling "spark on yarn where have all my memory gone".

1
votes

One possible issue is that your virtual memory is getting very large in proportion to your physical memory. You may want to set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml to see what happens. If the error stops, then that may be the issue.
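
For reference, a minimal sketch of that property as it would appear in yarn-site.xml (this assumes you can edit the NodeManager configuration and restart the NodeManagers; the surrounding <configuration> element is omitted):

    <!-- Disable YARN's virtual-memory check so containers are not killed
         for exceeding the virtual-to-physical memory ratio -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>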

I answered a similar question elsewhere and provided more information there: https://stackoverflow.com/a/42091255/3505110