I'm unable to understand the internal mechanism by which resources are allocated to MapReduce and Spark jobs.
We can run both MapReduce and Spark jobs in the same cluster. For a MapReduce job, the internal resource manager allocates available resources such as DataNodes and TaskTrackers to the job, and internally the job may require 'N' mappers and reducers.
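For concreteness, this is roughly how I picture a MapReduce job being configured and handed to the cluster; it is only a sketch using Hadoop's bundled TokenCounterMapper/IntSumReducer, and the class name, reducer count, and paths are placeholders I made up:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        // Built-in mapper/reducer shipped with Hadoop, used here just as an example.
        job.setMapperClass(TokenCounterMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // The number of map tasks follows the input splits; reducers are set explicitly.
        job.setNumReduceTasks(4);
        FileInputFormat.addInputPath(job, new Path("/data/input"));    // placeholder path
        FileOutputFormat.setOutputPath(job, new Path("/data/output")); // placeholder path
        // Submission asks the cluster's resource manager for slots/containers
        // in which the map and reduce tasks run.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```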
A Spark context, on the other hand, needs worker nodes and executors (internally JVMs) to run the program.
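And this is roughly how I picture a Spark application requesting its executors; again only a sketch, where the app name and the executor count/cores/memory are arbitrary examples, and the master (e.g. yarn) would normally be supplied at submission time via spark-submit:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkResourceSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("resource-demo")
                // These settings describe the executors (JVMs) the application wants;
                // the cluster manager decides which worker nodes actually host them.
                .set("spark.executor.instances", "4")
                .set("spark.executor.cores", "2")
                .set("spark.executor.memory", "2g");
        // The master URL (e.g. --master yarn) is typically passed by spark-submit.
        JavaSparkContext sc = new JavaSparkContext(conf);
        long count = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5)).count();
        System.out.println("count = " + count);
        sc.stop();
    }
}
```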
Does that mean there are different nodes for MapReduce and Spark jobs? If not, how is the distinction made between TaskTrackers and executors? How does the cluster manager identify the specific nodes for a Hadoop job versus a Spark job?
Can someone enlighten me here?