
I know that both under-replicated blocks and mis-replicated blocks occur when the number of available DataNodes is lower than the replication factor that has been set.

But what is the difference between them?

After resetting the replication factor to 1 (with only one DataNode available), both the under-replicated blocks and the missing replicas errors were cleared. I confirmed this by running hdfs fsck / and checking the FSCK report.
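Roughly, the commands I ran were along these lines (the path / and the -w wait flag are just what I happened to use):

    # set the replication factor of everything under / to 1 and wait for it to take effect
    hdfs dfs -setrep -w 1 /

    # re-run the filesystem check; the under-replicated and missing-replica counts should now be 0
    hdfs fsck /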


1 Answer


From "Hadoop: The Definitive Guide" by Tom White:

Over-replicated blocks: These are blocks that exceed their target replication for the file they belong to. Normally, over-replication is not a problem, and HDFS will automatically delete excess replicas.

Under-replicated blocks: These are blocks that do not meet their target replication for the file they belong to. HDFS will automatically create new replicas of under-replicated blocks until they meet the target replication. You can get information about the blocks being replicated (or waiting to be replicated) using hdfs dfsadmin -metasave.

Misreplicated blocks: These are blocks that do not satisfy the block replica placement policy (see Replica Placement). For example, for a replication level of three in a multirack cluster, if all three replicas of a block are on the same rack, then the block is misreplicated because the replicas should be spread across at least two racks for resilience. HDFS will automatically re-replicate misreplicated blocks so that they satisfy the rack placement policy.

Corrupt blocks: These are blocks whose replicas are all corrupt. Blocks with at least one noncorrupt replica are not reported as corrupt; the namenode will replicate the noncorrupt replica until the target replication is met.

Missing replicas: These are blocks with no replicas anywhere in the cluster.
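If you want to see these categories on your own cluster, fsck can print per-block detail. Roughly (exact flags and output format vary a bit between Hadoop versions, and the metasave file name below is just an example):

    # overall health summary, including under-replicated, mis-replicated,
    # corrupt-block and missing-replica counts
    hdfs fsck /

    # per-file block report with replica locations and rack placement,
    # useful for spotting mis-replicated blocks
    hdfs fsck / -files -blocks -locations -racks

    # list only the files that have corrupt blocks
    hdfs fsck / -list-corruptfileblocks

    # dump the NameNode's view of blocks waiting to be replicated
    # (written to a file in the NameNode's log directory)
    hdfs dfsadmin -metasave blocks.meta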

Hope this answers your question.