
I have a question regarding Apache Spark running on YARN in cluster mode. According to this thread, Spark itself does not have to be installed on every (worker) node in the cluster. My problem is with the Spark executors: in general, YARN, or rather the ResourceManager, decides about resource allocation, so Spark executors could be launched on any (worker) node in the cluster. But then, how can Spark executors be launched by YARN if Spark is not installed on any of the (worker) nodes?

Executors need to have the Spark runtime available somehow. That could be done either by installing it on the nodes or by shipping it with your application, e.g. in a fat jar that bundles Spark. I think... - LiMuBei
You don't have to include the binaries in a fat jar/uber jar -- they're automatically delivered by spark-submit. - Jacek Laskowski
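
For illustration, a minimal cluster-mode submission along the lines described in these comments might look like the sketch below; the class name, jar paths, and resource settings are placeholders, not values from this thread.

    # Launch in YARN cluster mode: the driver runs inside the YARN ApplicationMaster
    # container, and spark-submit ships the Spark runtime plus the listed jars to the
    # worker nodes, so Spark does not have to be pre-installed there.
    # (com.example.MyApp, the jar paths, and the resource sizes are placeholders.)
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyApp \
      --num-executors 4 \
      --executor-memory 2g \
      --jars /path/to/app-dependency.jar \
      /path/to/my-app.jar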

1 Answer


At a high level, when a Spark application is launched on YARN:

  1. An ApplicationMaster (Spark-specific) is created in one of the YARN containers.
  2. The other YARN containers are used for the Spark workers (executors).

The Spark driver passes serialized actions (code) to the executors, which process the data.
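
As a rough illustration of that hand-off (a hypothetical job, not code from this answer), the functions passed to map and filter below are serialized on the driver and shipped to the executors when an action runs:

    import org.apache.spark.sql.SparkSession

    object ClosureShippingSketch {
      def main(args: Array[String]): Unit = {
        // On YARN the master is supplied by spark-submit (--master yarn),
        // so it is not hard-coded here.
        val spark = SparkSession.builder().appName("closure-shipping-sketch").getOrCreate()
        val sc = spark.sparkContext

        val numbers = sc.parallelize(1 to 1000)

        // The functions passed to map/filter are serialized on the driver and
        // shipped to the executors, which apply them to their data partitions.
        val evenSquares = numbers.map(n => n * n).filter(_ % 2 == 0)

        // count() is an action: it triggers the actual distributed execution.
        println(s"even squares: ${evenSquares.count()}")

        spark.stop()
      }
    }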

spark-assembly provides the Spark-related jars needed to run Spark jobs on a YARN cluster, while the application ships its own functional (application-specific) jars.


Edit: (2017-01-04)

Spark 2.0 no longer requires a fat assembly jar for production deployment (source).
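
For Spark 2.x on YARN, one common way to make the runtime jars available to the containers without an assembly jar is to stage them on HDFS and point spark.yarn.archive (or spark.yarn.jars) at that location; the paths below are placeholder assumptions:

    # Stage the Spark jars on HDFS once (paths are placeholders):
    #   zip -j spark-libs.zip $SPARK_HOME/jars/*
    #   hdfs dfs -put spark-libs.zip /apps/spark/spark-libs.zip
    #
    # Then tell Spark on YARN where to find them, e.g. in spark-defaults.conf:
    spark.yarn.archive    hdfs:///apps/spark/spark-libs.zip

With this in place, YARN distributes the archived Spark jars to the node-local cache, so the worker nodes still do not need a local Spark installation.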