Member since: 03-29-2019
Posts: 66
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1367 | 08-16-2023 09:33 AM |
| | 1661 | 06-21-2021 04:15 AM |
| | 1775 | 06-16-2021 01:08 AM |
| | 3736 | 05-02-2021 08:43 PM |
| | 1006 | 01-19-2020 08:07 AM |
08-16-2023
10:18 PM
@skommineni, if the recommendation from @amk helps you resolve your issue, can you please mark the appropriate reply as the solution? This will make it easier for others to find the answer in the future.
08-25-2021
08:20 AM
You can find the error message 'Index build failed for service hdfs' in the Reports Manager (RM) log. The issue is caused by a corrupted '/var/lib/cloudera-scm-headlamp/' directory on the host where the Reports Manager role is configured to run.

Step 1: Stop the CMS Reports Manager role.
Step 2: On the host where the RM role runs, execute: sudo rm -rf /var/lib/cloudera-scm-headlamp/*
Step 3: Start the CMS Reports Manager role (this will rebuild the headlamp directory).

Done. You can now use CM > HDFS > File Browser.
07-06-2021
05:35 AM
@qiang Have you resolved your issue? If so, would you mind sharing the solution and marking this thread as solved? If you are still experiencing the issue, can you provide the information @amk has requested?
06-27-2021
11:50 PM
Hi @FEIDAI, have any of the replies resolved your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-16-2021
01:08 AM
1 Kudo
Hello @pauljoshiva

The NameNode endeavors to ensure that each block always has the intended number of replicas. It detects that a block has become under- or over-replicated when a block report arrives from a DataNode. When a block becomes over-replicated, the NameNode chooses a replica to remove: it prefers not to reduce the number of racks that host replicas, and secondly prefers to remove the replica from the DataNode with the least amount of available disk space. The goal is to balance storage utilization across DataNodes without reducing the block's availability.

Hope this answers your query.

Regards, Manoj
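The selection preference described above can be sketched as follows. This is a simplified illustration, not the actual NameNode code; the replica tuples and rack bookkeeping are hypothetical stand-ins for the NameNode's internal state:

```python
from collections import Counter

def choose_replica_to_remove(replicas):
    """Pick one replica to drop from an over-replicated block.

    `replicas` is a list of (datanode, rack, free_bytes) tuples.
    Preference 1: do not reduce the number of distinct racks, so only
    consider replicas whose rack holds more than one replica (if any).
    Preference 2: among those, remove the one on the DataNode with the
    least available disk space.
    """
    rack_counts = Counter(rack for _, rack, _ in replicas)
    # Replicas whose removal keeps every rack represented.
    safe = [r for r in replicas if rack_counts[r[1]] > 1]
    candidates = safe if safe else replicas
    return min(candidates, key=lambda r: r[2])

# Example: two replicas share rack "r1", so removing one of them keeps
# both racks; the one with less free space is chosen first.
replicas = [("dn1", "r1", 500), ("dn2", "r1", 100), ("dn3", "r2", 50)]
print(choose_replica_to_remove(replicas))  # ('dn2', 'r1', 100)
```

Note that dn3 has the least free space overall, but removing it would drop rack r2 entirely, so the rack preference wins.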
03-14-2021
09:02 PM
We are seeing a port bind exception in the error stack trace, which basically means the Secondary NameNode service is unable to register itself on that port. Port 50090 is the default port defined by the property "dfs.secondary.http.address" (or "dfs.namenode.secondary.http-address"). So please run "netstat -anp | grep 50090" and see which process is using the port in question. Stop that process and try starting the Secondary NameNode service, or else change the default port in the above-mentioned property to some other unused port.
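As a quick local alternative to the netstat check, a small sketch like the one below can test whether a port is already taken by attempting to bind it (the function name and structure are my own, not part of any Hadoop tooling):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already bound to the port.

    Tries to bind the port ourselves; an OSError (EADDRINUSE) on bind
    means another process already holds it.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# Demo: occupy a port, then confirm the check detects it.
blocker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
blocker.bind(("127.0.0.1", 0))          # OS picks a free port
taken_port = blocker.getsockname()[1]
blocker.listen(1)
print(port_in_use(taken_port))  # True
blocker.close()
```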
03-12-2021
12:07 AM
Hi, I don't think there is a way to retrieve this information via the REST API. You could write a Python script to retrieve the quota (https://pyhdfs.readthedocs.io/en/latest/pyhdfs.html), and then configure a custom Ambari alert with the script. Let me know if you get it working, because it could be useful to many people.
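As a minimal sketch of what such a script might do: the WebHDFS GETCONTENTSUMMARY operation (which pyhdfs wraps) reports the name quota and space quota for a directory. The payload below is a made-up example in the documented response shape, since we have no live cluster to query here:

```python
import json

# Sample response in the shape documented for the WebHDFS
# GETCONTENTSUMMARY operation; the numbers are illustrative only.
sample = '''{
  "ContentSummary": {
    "directoryCount": 2,
    "fileCount": 1,
    "length": 24930,
    "quota": 10000,
    "spaceConsumed": 24930,
    "spaceQuota": 100000000
  }
}'''

def extract_quota(payload):
    """Pull the name quota and space quota out of a GETCONTENTSUMMARY
    response. A value of -1 means no quota is set on the directory."""
    cs = json.loads(payload)["ContentSummary"]
    return cs["quota"], cs["spaceQuota"]

print(extract_quota(sample))  # (10000, 100000000)
```

A custom Ambari alert script would call the cluster, parse the response like this, and compare spaceConsumed against spaceQuota to decide the alert state.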
01-19-2020
08:07 AM
How much data did you delete? Did a checkpoint happen after you deleted the data? Also, please check whether any snapshots are present. The HDFS CLI "du" output includes not only normal files but also files that have been deleted yet still exist in snapshots (which is accurate in terms of real resource consumption). Please check the output using the -x flag, which excludes snapshots from the calculation: hdfs dfs -du -x -s -h /path
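The arithmetic behind this can be shown with a toy model (the file names and sizes are invented; this is only an illustration of how snapshot-referenced bytes keep plain `du` large after a delete):

```python
# Toy model: blocks referenced by a snapshot still consume real space
# even after the live file is deleted, so plain `du` counts them.
live_files = {"/data/keep.log": 100}                # sizes in bytes
snapshot_files = {"/data/keep.log": 100,
                  "/data/deleted.log": 500}         # captured pre-delete

# Plain `du`: everything still consuming space (live + snapshot-only).
du_all = sum({**snapshot_files, **live_files}.values())
# `du -x`: snapshots excluded, so only live files are counted.
du_x = sum(live_files.values())

print(du_all, du_x)  # 600 100
```

Until the snapshot is deleted, the 500 bytes of deleted.log remain in both the real disk usage and the default du output.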