Member since: 03-22-2017
Posts: 63
Kudos Received: 18
Solutions: 12
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1937 | 07-08-2023 03:09 AM
| 4511 | 10-21-2021 12:49 AM
| 2083 | 04-01-2021 05:31 AM
| 2608 | 03-30-2021 04:23 AM
| 4830 | 03-23-2021 04:30 AM
02-25-2024
11:15 PM
1 Kudo
@bkandalkar88, did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
02-20-2024
02:29 AM
1 Kudo
@KamProjectLead Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
07-13-2023
12:52 PM
@kaps_zk Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
05-02-2023
03:51 AM
You can use the ZooKeeper CLI to log in and remove the znode:

zookeeper-client -server <zookeeper-server-host>:2181 (use sudo, or log in as the HDFS user, if you hit a permission issue)
ls / or ls /hadoop-ha (if you don't see a /hadoop-ha znode in the ZK znode list, skip the step below)
rmr /hadoop-ha/nameservice1
10-22-2021
12:25 AM
@PabitraDas The objective is to copy data between two distinct clusters.
10-21-2021
12:49 AM
@DA-Ka You need to use the HDFS Find tool "org.apache.solr.hadoop.HdfsFindTool" for that purpose. Refer to the link below, which suggests a method to find the old files. - http://35.204.180.114/static/help/topics/search_hdfsfindtool.html However, the search-based HDFS find tool has been removed and is superseded in CDH 6 by the native "hdfs dfs -find" command, documented here: https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#find
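The native command takes the form `hdfs dfs -find <path> -name <pattern> -print`; unlike GNU find, it supports only name-based matching (`-name`/`-iname`) plus `-print`, so it needs a running HDFS client to try. As a local sketch of the same name-based matching, using an ordinary `find` on an illustrative temp directory (paths are made up, not from the thread):

```shell
# Local sketch mirroring `hdfs dfs -find <path> -name <pattern> -print`.
# /tmp/find_demo and its files are illustrative stand-ins for HDFS paths.
mkdir -p /tmp/find_demo/logs
touch /tmp/find_demo/logs/app.log /tmp/find_demo/readme.txt

# Match only the *.log file, as the HDFS -name predicate would.
find /tmp/find_demo -name '*.log' -print
# Prints: /tmp/find_demo/logs/app.log
```

On HDFS the equivalent would be e.g. `hdfs dfs -find /data -name '*.log' -print`; filtering by age (old files) still requires inspecting modification times from `hdfs dfs -ls` output, since the native find has no `-mtime` predicate.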
08-31-2021
11:48 PM
Will formatting ZKFC and restarting the NameNode work, given that this issue is essentially a communication failure between the ZKFC health-check RPC and the local NameNode?
03-30-2021
08:43 AM
@abagal / @PabitraDas Appreciate all your assistance and inputs on this. Thanks, Wert
03-30-2021
04:23 AM
1 Kudo
Hello @Amn_468 Please note that you get the block count alert after hitting the warning/critical threshold set in the HDFS configuration. It is a monitoring alert and does not impact HDFS operations as such. You may increase the monitoring threshold in CM (CM > HDFS > Configuration > DataNode Block Count Thresholds).

However, CM monitors the block counts on the DataNodes to ensure you are not writing many small files into HDFS. An increase in block counts on DNs is an early warning of small-file accumulation in HDFS.

The simplest way to check whether you are hitting the small files issue is to check the average block size of HDFS files; fsck reports it. If the value is too low (e.g. ~1 MB), you might be hitting the small files problem and it would be worth looking into; otherwise, there is no need to review the number of blocks.

$ hdfs fsck /
...
Total blocks (validated): 2899 (avg. block size 11475601 B) <<<<<

Similarly, you can get the average file size in HDFS by running a script as follows:

$ hdfs dfs -ls -R / | grep -v "^d" | awk '{OFMT="%f"; sum+=$5} END {print "AVG File Size =",sum/NR/1024/1024 " MB"}'

The file size reported by Reports Manager under "HDFS Reports" in Cloudera Manager can differ, as the report is extracted from an FSImage that is more than an hour old (not the latest one).

Hope this helps. If you have further questions, feel free to update the thread; otherwise, please mark it solved.

Regards, Pabitra Das
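The awk one-liner above can be sanity-checked without a cluster. This sketch feeds fabricated `hdfs dfs -ls -R`-style listing lines (the sizes and paths are made up) through the same grep/awk pipeline; field 5 of each line is the file size in bytes, and lines starting with `d` (directories) are dropped:

```shell
# Sketch: average file size from `hdfs dfs -ls -R`-style output.
# The listing lines are fabricated stand-ins; on a real cluster you
# would pipe `hdfs dfs -ls -R /` in instead of printf.
printf '%s\n' \
  '-rw-r--r--   3 hdfs hdfs    1048576 2021-03-30 04:23 /data/a.dat' \
  '-rw-r--r--   3 hdfs hdfs    3145728 2021-03-30 04:23 /data/b.dat' \
  'drwxr-xr-x   - hdfs hdfs          0 2021-03-30 04:23 /data/sub' \
  | grep -v '^d' \
  | awk '{sum += $5} END {printf "AVG File Size = %.2f MB\n", sum/NR/1024/1024}'
# Prints: AVG File Size = 2.00 MB
```

With two files of 1 MiB and 3 MiB, the average is 2 MiB, matching the printed result; a real run would tell you whether your average sits near the ~1 MB small-file warning zone.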