4
votes

I am running Spark on YARN in client mode, so I expect that YARN will allocate containers only for the executors. Yet, from what I am seeing, it seems like a container is also allocated for the driver, and I don't get as many executors as I was expecting.

I am running spark-submit on the master node. The parameters are as follows:

sudo spark-submit --class ... \
    --conf spark.master=yarn \
    --conf spark.submit.deployMode=client \
    --conf spark.yarn.am.cores=2 \
    --conf spark.yarn.am.memory=8G  \
    --conf spark.executor.instances=5 \
    --conf spark.executor.cores=3 \
    --conf spark.executor.memory=10G \
    --conf spark.dynamicAllocation.enabled=false \

While the application is running, Spark UI's Executors page shows 1 driver and 4 executors (5 entries in total), whereas I would expect 5 executors, not 4. At the same time, YARN UI's Nodes tab shows that a 9GB container is allocated on the node that isn't actually used (at least according to Spark UI's Executors page...), while each of the remaining nodes is running an 11GB container.

Because in my spark-submit command the driver is given 2GB less memory than the executors, I think that the 9GB container allocated by YARN is for the driver.
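
The sizes do seem to line up with YARN's usual memory overhead rule, assuming the default overhead of max(384MB, 10% of the requested memory) and the default yarn.scheduler.minimum-allocation-mb of 1024MB (I'm assuming my cluster uses these defaults):

    8G request:   8192 MB + max(384, 0.10 * 8192)  =  8192 + 819  ≈  9011 MB  -> rounded up to 9216 MB (9GB)
    10G request: 10240 MB + max(384, 0.10 * 10240) = 10240 + 1024 = 11264 MB  (11GB)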

Why is this extra container allocated? How can I prevent this?

Spark UI:

Spark UI's Executor tab

YARN UI:

YARN UI's Nodes tab


Update after answer by Igor Dvorzhak

I was wrongly assuming that the AM would run on the master node and that it would contain the driver app (so that the spark.yarn.am.* settings would apply to the driver process).

So I've made the following changes:

  • set the spark.yarn.am.* settings back to their defaults (512m of memory, 1 core)
  • set the driver memory to 8g through spark.driver.memory
  • did not set driver cores at all, since spark.driver.cores only applies in cluster mode

Because the AM at its default settings takes up 512m plus 384m of overhead, its container fits into the spare 1GB of free memory on a worker node. Spark gets the 5 executors it requested, and the driver memory matches the 8g setting. Everything works as expected now.
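
For completeness, here is a sketch of the adjusted submit command (class and jar omitted, as in the original command above). Note that in client mode the driver memory has to be set on the spark-submit command line or in spark-defaults.conf rather than in the application's SparkConf, because the driver JVM is already running by the time the SparkConf is read:

sudo spark-submit --class ... \
    --conf spark.master=yarn \
    --conf spark.submit.deployMode=client \
    --conf spark.driver.memory=8G \
    --conf spark.executor.instances=5 \
    --conf spark.executor.cores=3 \
    --conf spark.executor.memory=10G \
    --conf spark.dynamicAllocation.enabled=false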

Spark UI:

Spark UI's Executor tab

YARN UI:

YARN UI's Nodes tab


2 Answers

1
votes

The extra container is allocated for the YARN application master:

In client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

Even though in client mode the driver runs in the client process, the YARN application master still runs on YARN and requires a container allocation.

There is no way to prevent container allocation for the YARN application master.
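
While the allocation itself cannot be avoided, the AM container can at least be kept small in client mode via the spark.yarn.am.* settings, which size only the application master (the driver is configured separately through spark.driver.*). As a sketch, the defaults shown explicitly below request only 512m plus 384m of overhead, i.e. roughly 1GB, when added to the spark-submit command:

    # defaults; in client mode these size the AM container only
    --conf spark.yarn.am.memory=512m \
    --conf spark.yarn.am.cores=1 \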

For reference, a similar question was asked some time ago: Resource Allocation with Spark and Yarn.

-1
votes

You can specify the driver memory and the number of executors in spark-submit as below.

spark-submit --jars..... --master yarn --deploy-mode cluster \
    --driver-memory 2g --driver-cores 4 \
    --num-executors 5 --executor-memory 10G --executor-cores 3

Hope it helps you.