I am running a test job that takes a zipped 5 GB dataset and dumps it into MongoDB. I have 1 master and 3 slave nodes, each with 16 CPUs and 30 GB RAM. After submitting the job, Spark appears to use only 2 of the slave nodes and assigns 32 cores to the job, even though I have dynamic allocation enabled. This is the only job running on the cluster, so I expected around 47 cores (leaving 1 for the YARN application master) to be used across all 3 nodes. I am using AWS EMR with YARN as the cluster manager.
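For reference, dynamic allocation is enabled roughly like this in my job; the app name and executor bounds below are placeholders, not my exact settings:

```python
# Minimal sketch of the dynamic allocation setup (values are placeholders).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("zip-to-mongodb-load")                        # placeholder app name
    .config("spark.dynamicAllocation.enabled", "true")      # let Spark scale executors up/down
    .config("spark.shuffle.service.enabled", "true")        # external shuffle service, needed on YARN
    .config("spark.dynamicAllocation.minExecutors", "1")    # placeholder lower bound
    .config("spark.dynamicAllocation.maxExecutors", "16")   # placeholder upper bound
    .getOrCreate()
)
```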
Is there a particular reason why only 2 nodes take part in the job and only 32 cores are allocated when dynamic allocation is in use?