Member since: 01-05-2020
Posts: 5
Kudos Received: 0
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 1369 | 06-02-2020 10:22 PM
06-02-2020 10:22 PM
For people who are facing this type of problem: first of all, you should understand the nature of the problem. To do that, please read the descriptions of the following issues:

https://issues.apache.org/jira/browse/HDFS-8072
https://issues.apache.org/jira/browse/HDFS-9530

The following links are useful for understanding what a block replica is:

https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-1/
https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-2/

Solutions:
1. Find the faulty client software that frequently breaks its connection to Hadoop during write or append operations.
2. Try changing the replication policy (risky).
3. Upgrade Hadoop to a recent version.

Note that you cannot reset "Reserved Space for Replicas" without restarting the Hadoop services! One way to watch it grow is sketched below.
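A minimal sketch for watching the per-volume reserved space through a DataNode's JMX endpoint. Assumptions to verify for your cluster: the DataNode web port (9864 on Hadoop 3.x, 50075 on 2.x), and that your release exposes a "reservedSpaceForReplicas" field inside the DataNodeInfo bean's VolumeInfo attribute (newer releases do; older ones may not). The hostname is hypothetical.

```python
# Sketch: query a DataNode's JMX endpoint and print per-volume reserved space.
# Not a definitive tool -- field names depend on your Hadoop version.
import json
import urllib.request

DATANODE = "http://datanode-host:9864"  # hypothetical host; adjust to your cluster

def volume_info(base_url):
    url = base_url + "/jmx?qry=Hadoop:service=DataNode,name=DataNodeInfo"
    with urllib.request.urlopen(url) as resp:
        beans = json.load(resp)["beans"]
    # VolumeInfo is itself a JSON-encoded string inside the bean.
    return json.loads(beans[0]["VolumeInfo"])

if __name__ == "__main__":
    for volume, stats in volume_info(DATANODE).items():
        print(volume,
              "reservedSpaceForReplicas =", stats.get("reservedSpaceForReplicas"),
              "freeSpace =", stats.get("freeSpace"))
```

Polling this periodically (e.g. from cron) lets you correlate growth of the reserved space with the client jobs running at the time, which is how you find the software from solution 1.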
06-01-2020 10:13 PM
We found that the "Reserved Space for Replicas" is not related to the "Non DFS" space.
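For context, "Non DFS Used" in the `hdfs dfsadmin -report` output is a derived value; a minimal sketch of the usual arithmetic (the numbers are illustrative, and the exact accounting varies across Hadoop versions):

```python
# Illustrative only: how "Non DFS Used" is commonly derived from the
# capacity figures that `hdfs dfsadmin -report` prints per DataNode.
configured_capacity = 10 * 1024**4  # raw disk minus dfs.datanode.du.reserved
dfs_used = 6 * 1024**4              # space taken by finalized HDFS blocks
dfs_remaining = 3 * 1024**4         # space HDFS still considers available

non_dfs_used = configured_capacity - dfs_used - dfs_remaining
print(non_dfs_used / 1024**4, "TiB used by non-HDFS data")  # 1.0 TiB
```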
06-01-2020 08:23 AM
How can we find out why "Reserved Space for Replicas" is constantly increasing, and how can we control or set the amount of space allocated to "Reserved Space for Replicas" in Hadoop? We found that "Reserved Space for Replicas" is not related to the "Non DFS" space and that it can keep growing until the DataNodes are rebooted. We could not find a way to limit the space allocated to "Reserved Space for Replicas" 😞 We thought that dfs.datanode.du.reserved could control "Reserved Space for Replicas", but it does not!
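For reference, dfs.datanode.du.reserved is set per volume in hdfs-site.xml; it only carves out headroom for non-HDFS usage and, as we found, does not cap "Reserved Space for Replicas" (the 10 GB value below is just an example):

```xml
<!-- hdfs-site.xml: reserve ~10 GB per volume for non-HDFS use.
     This does NOT limit "Reserved Space for Replicas". -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```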
Labels:
- Apache Hadoop