Member since
03-29-2019
66
Posts
2
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3165 | 08-16-2023 09:33 AM |
| | 2924 | 06-21-2021 04:15 AM |
| | 2848 | 06-16-2021 01:08 AM |
| | 5734 | 05-02-2021 08:43 PM |
| | 1632 | 01-19-2020 08:07 AM |
08-16-2023
10:18 PM
@skommineni, if the recommendation from @amk helps you resolve your issue, can you please mark the appropriate reply as the solution? This will make it easier for others to find the answer in the future.
08-25-2021
08:20 AM
You can find the error message 'Index build failed for service hdfs' in the Reports Manager (RM) log. This issue is caused by a corrupted '/var/lib/cloudera-scm-headlamp/' directory on the host where the Reports Manager role is configured to run.

Step 1: Stop the CMS Reports Manager role.
Step 2: On the host where the RM role runs, execute: sudo rm -rf /var/lib/cloudera-scm-headlamp/*
Step 3: Start the CMS Reports Manager role (this will rebuild the headlamp directory).

Done. You can now use CM > HDFS > File Browser.
06-27-2021
11:50 PM
Hi @FEIDAI, did any of the replies resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-16-2021
01:08 AM
1 Kudo
Hello @pauljoshiva The NameNode endeavors to ensure that each block always has the intended number of replicas. It detects that a block has become under- or over-replicated when a block report from a DataNode arrives. When a block becomes over-replicated, the NameNode chooses a replica to remove. It prefers not to reduce the number of racks that host replicas, and secondly prefers to remove a replica from the DataNode with the least amount of available disk space. The goal is to balance storage utilization across DataNodes without reducing the block's availability. Hope this answers your query. Regards, Manoj
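To observe this behavior in practice, the replication state of a path can be inspected from the CLI. A minimal sketch, assuming an HDFS client on a running cluster and a hypothetical path /path/to/file:

```shell
# Report each block's replicas, their DataNode locations, and rack placement,
# which shows whether any block is currently under- or over-replicated.
hdfs fsck /path/to/file -blocks -locations -racks

# Change the target replication factor (here 3, as an example) and wait (-w)
# until the NameNode has added or removed replicas to match it.
hdfs dfs -setrep -w 3 /path/to/file
```

Raising or lowering the replication factor with -setrep is an easy way to watch the NameNode add or trim replicas according to the rack-preserving policy described above.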
01-19-2020
08:07 AM
How much data did you delete? Did a checkpoint happen after you deleted the data? Also, please check whether any snapshots are present. The HDFS CLI "du" output includes not only normal files but also files that have been deleted and still exist in snapshots (which is accurate in terms of real resource consumption). Please check the output using the -x flag, which excludes snapshots from the calculation: hdfs dfs -du -x -s -h /path
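A quick way to confirm whether snapshots are inflating the reported usage is to compare the two du outputs side by side. A sketch, assuming an HDFS client on a running cluster:

```shell
# Usage including files retained only in snapshots.
hdfs dfs -du -s -h /path

# Usage with -x, which excludes snapshot contents; a large gap between the
# two numbers means deleted files are still being held by snapshots.
hdfs dfs -du -x -s -h /path

# List all snapshottable directories to check where snapshots may exist.
hdfs lsSnapshottableDir
```

If the gap is large, the space will only be reclaimed once the snapshots referencing the deleted files are themselves deleted.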