
How to find out why "Reserved Space for Replicas" is constantly increasing, and how to limit the space used for this type of cache?

We found that "Reserved Space for Replicas" is related to "Non DFS Used" space, and that it can keep increasing until the DataNodes are restarted. We couldn't find a way to limit the space allocated for "Reserved Space for Replicas" :(

We thought that dfs.datanode.du.reserved could control "Reserved Space for Replicas", but it does not.
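For reference, this is how dfs.datanode.du.reserved is typically set in hdfs-site.xml. It reserves disk space per volume for non-HDFS use; as noted above, it does not cap "Reserved Space for Replicas". The value below is just an example:

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Example: reserve 10 GB per volume for non-DFS use.
       This does NOT limit "Reserved Space for Replicas". -->
  <value>10737418240</value>
</property>
```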

So our question is: how can we control the amount of space allocated for "Reserved Space for Replicas" in Hadoop?


1 Answer


For people who have faced this type of problem: first of all, you should understand the nature of the problem. To do that, read the descriptions of the following issues:

The following links are useful for understanding what a block replica is:

Solutions

  1. Find the misbehaving software that frequently breaks its connection to Hadoop during write or append operations
  2. Try changing the replication policy (risky)
  3. Upgrade Hadoop to the latest version
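To see how much the problem has grown, the reserved space shows up inside the "Non DFS Used" figure that `hdfs dfsadmin -report` prints for each DataNode. A minimal sketch of pulling those lines out of the report; the report text below is an assumed sample so the script is self-contained, while on a real cluster you would pipe the live command instead:

```shell
#!/bin/sh
# On a real cluster:
#   hdfs dfsadmin -report | grep 'Non DFS Used'
# Here we use an assumed sample of the report format.
report='Name: 10.0.0.1:9866 (dn1)
DFS Used: 1073741824 (1 GB)
Non DFS Used: 5368709120 (5 GB)
Name: 10.0.0.2:9866 (dn2)
DFS Used: 3221225472 (3 GB)
Non DFS Used: 2147483648 (2 GB)'

# Print one "Non DFS Used" line per DataNode.
printf '%s\n' "$report" | grep 'Non DFS Used'
```

Watching these values over time (e.g. from cron) helps confirm whether reserved space is still creeping up after applying one of the fixes above.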

You can't reset "Reserved Space for Replicas" without restarting the Hadoop services!