Can somebody please help me understand the questions below about Hadoop 1.x?
Say I have just a single node with 8 GB of RAM, a 40 TB hard disk, and a quad-core processor. The block size is 64 MB, and we need to process 4 TB of data. How do we decide the number of Mappers and Reducers?
Can someone please explain this in detail, and let me know if I need to consider any other parameters in the calculation? My rough attempt is below.
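Here is my understanding so far; please correct me if the reasoning is wrong. I believe that in Hadoop 1.x the number of map tasks defaults to the number of input splits, which for a splittable format is roughly one per HDFS block:

```java
// Back-of-the-envelope estimate, assuming one map task per 64 MB HDFS block
// (please correct me if Hadoop 1.x computes input splits differently).
public class MapperEstimate {
    public static void main(String[] args) {
        long inputBytes = 4L * 1024 * 1024 * 1024 * 1024; // 4 TB of input data
        long blockBytes = 64L * 1024 * 1024;              // 64 MB block size
        long mapTasks = inputBytes / blockBytes;          // one split per block
        System.out.println("Estimated map tasks: " + mapTasks); // prints 65536
    }
}
```

If that is right, the job would consist of 65,536 map tasks in total, but only a few of them can run concurrently on a single node, limited by the TaskTracker's slot settings (my question 3 below). Is that the correct way to think about it, and how does the number of Reducers get decided?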
Now say I have 10 Data nodes in a cluster, each with 8 GB of RAM, a 40 TB hard disk, and a quad-core processor. The block size is 64 MB, and we need to process 40 TB of data. How do we decide the number of Mappers and Reducers?
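If my block-based reasoning above holds, the same arithmetic would give 40 TB / 64 MB = 655,360 map tasks for the whole job, which works out to roughly 65,536 blocks' worth of work per Data node on average. Is that correct, or do the per-node slot limits change the total task count rather than just how many run at once?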
What is the default number of mapper and reducer slots on a Data node with a quad-core processor? I have sketched how I would check this below.
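This is how I was planning to check the slot configuration on a node. The property names are what I found for Hadoop 1.x, and my understanding is that both default to 2 per TaskTracker regardless of the core count, but please confirm:

```java
// A small probe to print the per-TaskTracker slot settings, assuming the
// Hadoop 1.x classpath and its mapred-site.xml are available on this node.
import org.apache.hadoop.mapred.JobConf;

public class SlotProbe {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Per-TaskTracker concurrent task limits; I believe both default to 2.
        System.out.println("map slots:    "
                + conf.getInt("mapred.tasktracker.map.tasks.maximum", 2));
        System.out.println("reduce slots: "
                + conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2));
    }
}
```

If the defaults really are 2 map and 2 reduce slots, does that mean a quad-core node is underused out of the box, and should I raise these values to match the number of cores?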
Many Thanks, Manish