Member since: 08-08-2020
Posts: 42
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| | 258 | 11-14-2022 07:14 AM |
12-05-2022 11:11 AM
Hello @hanumanth, if the ZooKeeper services appear to be up and running, you may need to compare the Spark job failure timestamp against the ZooKeeper logs from the Leader server. If there is no visible issue on the ZooKeeper side, check whether the HBase client configurations were applied properly in the Spark job configuration. Also, confirm that the HBase service itself is up and functional. If the above does not help, you may want to raise a support ticket for the Spark component.
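To illustrate, here is a minimal sketch of those checks from a client node; the ZooKeeper host, the HBase config path, and the job jar name are placeholders, not values from your cluster:

```bash
# ZooKeeper health checks (zk-host-1 and port 2181 are placeholders).
# Note: four-letter-word commands may need to be whitelisted on newer ZK versions.
echo ruok | nc zk-host-1 2181   # expect "imok"
echo stat | nc zk-host-1 2181   # the "Mode:" line shows leader/follower

# Quick HBase availability check from the HBase shell
echo "status 'summary'" | hbase shell -n

# Ship the HBase client configuration with the Spark job so the driver
# and executors pick it up (paths and jar name are placeholders)
spark-submit \
  --files /etc/hbase/conf/hbase-site.xml \
  --conf spark.driver.extraClassPath=/etc/hbase/conf \
  --conf spark.executor.extraClassPath=/etc/hbase/conf \
  your_job.jar
```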
11-14-2022 07:14 AM
@hanumanth you may check whether the files you deleted from HDFS still exist somewhere in HDFS and, if so, check the replication factor applied to them; for files, it is shown in the second column of the hdfs dfs -ls output. You can collect a recursive listing by running hdfs dfs -ls -R / > hdfs_recursive, and then filter that output to see which replication factors are applied to your files: hdfs dfs -ls -R / | awk '{print $2}' | sort | uniq -c. Also, ensure that no content from other processes (anything other than HDFS) is filling up the mount points used to store the HDFS blocks. Finally, you can compare the du output with and without snapshot data (du command usage [1]; see the sketch below): if the two results differ, there are probably snapshots still present that are preventing blocks from being deleted from the DataNodes.
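For reference, a minimal sketch of the full sequence; /data is a placeholder for the directory you cleaned up, and the -x flag requires a reasonably recent Hadoop release:

```bash
# Recursive listing; for files, the second column is the replication factor
hdfs dfs -ls -R / > hdfs_recursive

# Count how many files carry each replication factor
# (directories show "-" in column 2, so keep numeric values only)
hdfs dfs -ls -R / | awk '$2 ~ /^[0-9]+$/ {print $2}' | sort | uniq -c

# Compare usage with and without snapshot data
hdfs dfs -du -s /data      # includes blocks still referenced by snapshots
hdfs dfs -du -s -x /data   # -x excludes snapshot data
# If the two numbers differ, snapshots are still holding on to deleted blocks.
```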
11-07-2022 08:04 AM
What is the current HDP, CDH, or CDP version of this cluster? Does the issue occur in every browser? Did it work as expected before?