I have a Spark cluster running in YARN mode on top of HDFS. I launched one worker with 2 cores and 2g of memory, then submitted a job that dynamically requests 1 executor with 3 cores. The job still runs. Can somebody explain the difference between the number of cores a worker is launched with and the number of cores requested for the executors? My understanding was that, since executors run inside workers, they cannot acquire more resources than the worker has available.
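For concreteness, here is roughly what the submission looks like (a minimal sketch: the app name, the executor memory value, and the dummy job are placeholders I'm adding for illustration; only the executor/core counts match what I described above):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("core-allocation-test")          # placeholder app name
    .master("yarn")                           # cluster runs in YARN mode on HDFS
    .config("spark.executor.instances", "1")  # 1 executor requested
    .config("spark.executor.cores", "3")      # 3 cores per executor -- more than the worker node was given
    .config("spark.executor.memory", "1g")    # assumed value, not part of the question
    .getOrCreate()
)

# Trivial job just to force an executor to be allocated and run tasks.
print(spark.sparkContext.parallelize(range(100)).sum())
spark.stop()
```

The worker-side limit (2 cores, 2g) is set on the node itself, not in this job configuration, which is why I expected the 3-core executor request to be rejected.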