
I am using Spark on Hadoop and want to know how Spark allocates virtual memory to an executor.

As per the YARN vmem-pmem ratio (yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1), a container is allowed 2.1 times its physical memory as virtual memory.

Hence, if Xmx is 1 GB, then 1 GB * 2.1 = 2.1 GB of virtual memory is allowed for the container.
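For reference, that ratio comes from the yarn.nodemanager.vmem-pmem-ratio property in yarn-site.xml, where 2.1 is the default; a cluster using the default behaves as if it carried an entry like this (a minimal sketch, not from my actual cluster config):

    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>
      <!-- virtual memory allowed per unit of physical memory
           requested by a container -->
    </property>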

How does it work on Spark? And is the statement below correct?

If I give executor memory = 1 GB, then:

Total virtual memory = 1 GB * 2.1 * spark.yarn.executor.memoryOverhead

Is this true?

If not, then how is virtual memory for an executor calculated in Spark?
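For context, by "give executor memory = 1 GB" I mean submitting along these lines (a sketch; the class and jar names are placeholders):

    # placeholder application class and jar, shown only to illustrate
    # how the 1 GB executor heap is requested
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --executor-memory 1g \
      --class com.example.MyApp \
      myapp.jar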

Can you please check my answer? - backtrack

1 Answer


For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:

[Image: Spark on YARN executor resource configuration properties]

In spark-defaults.conf, spark.executor.memory is set to 2 GB.
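That is, spark-defaults.conf carries an entry like this (only the relevant line shown):

    spark.executor.memory    2g

Putting the pieces together for that 2 GB executor: YARN sizes the container as the executor heap plus the memory overhead, and the virtual memory limit is the vmem-pmem ratio applied to that container total, not to the heap alone. A sketch, assuming the common overhead default of max(384 MB, 10% of executor memory) and the default 2.1 ratio (YARN may additionally round the container up to a multiple of yarn.scheduler.minimum-allocation-mb):

    container memory     = spark.executor.memory + spark.yarn.executor.memoryOverhead
                         = 2048 MB + max(384 MB, 0.10 * 2048 MB)
                         = 2048 MB + 384 MB
                         = 2432 MB
    virtual memory limit = 2432 MB * 2.1 ≈ 5107 MB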

I got this from: Resource Allocation Configuration for Spark on YARN