Assume I have a single cluster of 4 nodes and a data volume of 500 GB. In Hadoop 1, with the default block size (64 MB), how will the data blocks be assigned to the nodes? I am also assuming a replication factor of 3.
My understanding: if I have 200 MB of data, then in Hadoop 1 with the default block size (64 MB) the data is split into 4 blocks (64 + 64 + 64 + 8), and those four blocks, along with their replicas, will be distributed across the four nodes.
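The block arithmetic above can be sketched quickly. This is not HDFS code, just an illustrative calculation (the function name `split_into_blocks` is my own); the last block simply holds whatever remains after the full 64 MB blocks:

```python
def split_into_blocks(size_mb, block_mb=64):
    """Return the list of HDFS block sizes (in MB) for a file of size_mb."""
    full_blocks = size_mb // block_mb          # number of complete blocks
    remainder = size_mb % block_mb             # partial final block, if any
    blocks = [block_mb] * full_blocks
    if remainder:
        blocks.append(remainder)               # last block only occupies the leftover size
    return blocks

print(split_into_blocks(200))   # 200 MB -> [64, 64, 64, 8]
print(split_into_blocks(500))   # 500 MB -> seven 64 MB blocks + one 52 MB block
```

So for 500 MB the file would occupy 8 blocks (7 × 64 MB + 52 MB), and with a replication factor of 3 that means 24 block replicas spread across the 4 nodes, with HDFS ensuring no node holds two replicas of the same block.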
I have added a picture to show my understanding. If my understanding is correct, how will it work for 500 MB of data? If not, please help me understand.

My understanding of HDFS