
I am doing a spark-submit like this:

    spark-submit --class com.mine.myclass --master yarn-cluster --num-executors 3 --executor-memory 4G spark-examples_2.10-1.0.jar

In the web UI, I can indeed see that there are 3 executors, but each shows only 2G of memory. When I set --executor-memory 2G, the UI shows 1G per node.

Why does it cut my setting in half?
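(For reference, here is a minimal sketch of how the requested value can be read back from the SparkConf inside the driver; the app name below is just a placeholder, not part of the original job.)

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch: print the executor memory the driver believes was requested.
    // "memory-check" is a placeholder app name for illustration only.
    val sc = new SparkContext(new SparkConf().setAppName("memory-check"))
    println(sc.getConf.get("spark.executor.memory", "1g"))  // should print "4g" if the flag was applied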


1 Answer


The Executors page of the web UI shows the amount of storage memory, which by default is 54% of the Java heap: spark.storage.memoryFraction (0.6) * spark.storage.safetyFraction (0.9) = 0.54. With --executor-memory 4G that works out to roughly 4G * 0.54 ≈ 2.16G, which the UI displays as about 2G; with 2G you see about 1G.
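Concretely, the displayed figure can be reproduced from the default fractions. This is a rough sketch only; the exact number also depends on the max heap the JVM reports, so the UI may show slightly more or less than this:

    // Rough sketch of how the UI's storage-memory figure is derived (Spark 1.x defaults).
    object StorageMemorySketch extends App {
      val executorMemoryGb = 4.0  // --executor-memory 4G
      val memoryFraction   = 0.6  // spark.storage.memoryFraction (default)
      val safetyFraction   = 0.9  // spark.storage.safetyFraction (default)

      val storageMemoryGb = executorMemoryGb * memoryFraction * safetyFraction
      // 4.0 * 0.6 * 0.9 = 2.16 GB, which the UI shows as roughly 2G
      println(f"Storage memory shown in UI: ~$storageMemoryGb%.2f GB")
    }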