I know that data uploaded into HDFS is split into blocks (typically ~64MB) and replicated across the DataNodes in a Hadoop cluster. My question is: what happens when the combined capacity of all the DataNodes in the cluster is insufficient? For example, I have 3 DataNodes, each with 10GB of capacity (30GB altogether), and I want to load a 60GB file into HDFS on the same cluster. I don't see how the 60GB of data could be split into blocks and accommodated by the DataNodes.
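To make the numbers concrete, here is the back-of-the-envelope arithmetic I'm doing (a rough sketch that assumes the default 64MB block size and the default replication factor of 3; adjust if your cluster is configured differently):

```python
GB = 1024  # sizes below are in MB

data_size_mb = 60 * GB        # the 60GB file I want to load
block_size_mb = 64            # assumed default block size
replication = 3               # assumed default replication factor

# ceiling division: number of 64MB blocks the file is split into
num_blocks = -(-data_size_mb // block_size_mb)

# raw storage consumed once every block is replicated 3 times
raw_storage_needed_mb = data_size_mb * replication

# total raw capacity of the cluster: 3 DataNodes x 10GB each
cluster_capacity_mb = 3 * 10 * GB

print(f"blocks needed:        {num_blocks}")                              # 960
print(f"raw storage needed:   {raw_storage_needed_mb / GB:.0f} GB")       # 180 GB
print(f"cluster raw capacity: {cluster_capacity_mb / GB:.0f} GB")         # 30 GB
```

So even before replication the file is larger than the whole cluster, and with replication the gap is even bigger, which is what prompts the question.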
Thanks