I'm using an 8-node Hadoop cluster, and namenode memory usage shows as 7%. I'm worried this cluster is going to reach the limits of namenode memory.

The current cluster storage size is 5.6 TB, the namenode heap size is 4 GB, and the block size is 128 MB.

How do I calculate the maximum number of datanodes a Hadoop cluster (with a single namenode) can have?

1 Answer

Namenode memory usage is proportional to the number of blocks (and files) it tracks, and the common rule of thumb is that 1 million blocks take about 1 GB of heap. With your 5.6 TB of data and a 128 MB block size, that works out to roughly 45,000 blocks before replication (you can recalculate with your exact numbers), which is a small fraction of what a 4 GB heap can handle. My conclusion is that you have enough memory, unless there are lots of small files: each file occupies at least one block, so many small files inflate the block count far beyond what the raw storage size suggests.
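A minimal sketch of that estimate, using the question's figures; the replication factor of 3 is an assumption (the HDFS default) and the 1 GB per million blocks figure is the rule of thumb quoted above, not an exact measurement:

```python
# Rough namenode heap estimate from cluster size, block size, and the
# "~1 million blocks per 1 GB of heap" rule of thumb.

TB = 1024 ** 4        # bytes in a tebibyte
MB = 1024 ** 2        # bytes in a mebibyte

storage_bytes = 5.6 * TB      # cluster data size from the question
block_size    = 128 * MB      # dfs.blocksize from the question
replication   = 3             # assumed HDFS default replication factor

# Logical blocks before replication; replicas add datanode storage and
# per-replica location entries, but each logical block is one namenode object.
blocks = storage_bytes / block_size

# Rule of thumb: ~1 GB of namenode heap per 1 million blocks.
heap_gb_needed = blocks / 1_000_000

print(f"blocks (before replication): {blocks:,.0f}")
print(f"estimated heap needed:       {heap_gb_needed:.3f} GB of 4 GB")
```

At roughly 45,875 blocks this needs well under 0.1 GB of the 4 GB heap, so block count is not the constraint here; a flood of small files (one block each) would be.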