
How to reset “Reserved Space for Replicas” without restarting Hadoop services


How can we find out why "Reserved Space for Replicas" keeps increasing, and how can we control or set the amount of space allocated to it in Hadoop?

 

We found that "Reserved Space for Replicas" is not related to "Non DFS" space, and that it can keep growing until the DataNodes are rebooted. We could not find a way to limit the space allocated to "Reserved Space for Replicas" 😞
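One way to watch this value is through a DataNode's /jmx endpoint. A minimal sketch of parsing such a dump follows; note that the sample response and the "ReservedSpaceForReplicas" field name are assumptions here, since whether that field is exposed, and under which bean, depends on your Hadoop version.

```python
import json

# Hypothetical sample of what a DataNode's /jmx endpoint might return.
# Field names and values below are illustrative, not taken from a real cluster.
sample_jmx = json.dumps({
    "beans": [
        {
            "name": "Hadoop:service=DataNode,name=FSDatasetState",
            "Capacity": 4000000000000,
            "DfsUsed": 1200000000000,
            "Remaining": 2300000000000,
            "ReservedSpaceForReplicas": 536870912,  # assumed field
        }
    ]
})

def reserved_for_replicas(jmx_text: str) -> int:
    """Return the reserved-for-replicas byte count from a JMX dump, or 0."""
    for bean in json.loads(jmx_text).get("beans", []):
        if "FSDatasetState" in bean.get("name", ""):
            return int(bean.get("ReservedSpaceForReplicas", 0))
    return 0

print(reserved_for_replicas(sample_jmx))  # 536870912
```

Polling this per DataNode over time shows whether the reservation is being released after writes complete or only growing.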

 

We thought that dfs.datanode.du.reserved would control "Reserved Space for Replicas", but it does not!
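For reference, dfs.datanode.du.reserved only sets aside a fixed number of bytes per volume for non-DFS use; it does not cap the replica reservation discussed here. A typical hdfs-site.xml entry looks like this (the value is just an example):

```xml
<!-- hdfs-site.xml: reserves space per volume for non-DFS use. -->
<!-- It does NOT limit "Reserved Space for Replicas". -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- 10 GiB, example value -->
</property>
```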

1 ACCEPTED SOLUTION


For anyone facing this type of problem: first, understand its nature. To do that, please read the descriptions of the following issues:

https://issues.apache.org/jira/browse/HDFS-8072
https://issues.apache.org/jira/browse/HDFS-9530


The following links are useful for understanding what a block replica is:

https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-1/
https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-2/


Solutions

1. Find the misbehaving software that frequently breaks its connection to Hadoop during write or append operations

2. Try changing the replication policy (risky)

3. Upgrade Hadoop to a recent version
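The mechanism behind solution 1 can be sketched with a toy model (this is not Hadoop code; it only illustrates the accounting described in HDFS-8072/HDFS-9530, assuming each in-flight block replica reserves a full block up front):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MiB, the usual HDFS default

class DataNodeModel:
    """Toy model of per-DataNode replica-space reservation (not Hadoop code)."""

    def __init__(self):
        self.reserved = 0

    def open_writer(self):
        # Each in-flight block replica reserves a full block up front.
        self.reserved += BLOCK_SIZE

    def close_writer(self):
        # A clean close releases the reservation.
        self.reserved -= BLOCK_SIZE

    def restart(self):
        # In the affected versions, only a restart clears leaked reservations.
        self.reserved = 0

dn = DataNodeModel()
for _ in range(3):
    dn.open_writer()              # three clients start writing
dn.close_writer()                 # only one of them closes cleanly
print(dn.reserved // BLOCK_SIZE)  # 2 blocks stay reserved ("leaked")
dn.restart()
print(dn.reserved)                # 0
```

This is why finding the client that keeps dropping connections mid-write matters: each abandoned writer leaves a block-sized reservation behind until the DataNode restarts.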

 

You can't reset “Reserved Space for Replicas” without restarting the Hadoop services!

