Member since: 01-07-2020
Posts: 36
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3728 | 04-04-2022 04:17 AM |
04-12-2022 02:14 PM
@yagoaparecidoti Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
04-04-2022 04:17 AM
Hello! If the query is resolved, can you kindly mark this as done?
01-06-2022 07:31 AM
First of all, validate in ZooKeeper whether there are entries for the HBase ID. There is also an easy way to wipe the slate clean: bin/hbase clean. Selecting the --cleanAll option deletes both the HDFS data and the ZooKeeper data, which should clean things up and get things going again. **Make sure to stop the HBase service before doing this.** Alternatively, you can use the --cleanZk option to delete only the ZooKeeper data and let it be repopulated. The steps remain the same: bring down the HBase service and run the commands from the admin/master node. **These actions cannot be reverted.** A sketch of the commands follows below.
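A minimal sketch of the procedure, assuming a default installation where bin/hbase is on the path (hbase zkcli is HBase's built-in ZooKeeper shell; the option spellings follow the hbase clean usage text):

```
# Stop the HBase service first (via Ambari / Cloudera Manager or your init scripts).

# 1. Check ZooKeeper for leftover HBase znodes:
hbase zkcli ls /hbase

# 2. Wipe both the HDFS data and the ZooKeeper data (irreversible):
bin/hbase clean --cleanAll

# ...or wipe only the ZooKeeper data, so it is repopulated on restart:
bin/hbase clean --cleanZk
```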
09-23-2021 07:51 AM
1 Kudo
HDFS data might not always be distributed uniformly across DataNodes. One common reason is the addition of new DataNodes to an existing cluster. HDFS provides a balancer utility that analyzes block placement and balances data across the DataNodes. The balancer moves blocks until the cluster is deemed balanced, meaning that the utilization of every DataNode (ratio of used space on the node to total capacity of the node) differs from the utilization of the cluster (ratio of used space on the cluster to total capacity of the cluster) by no more than a given threshold percentage. Note that the balancer does not balance between individual volumes on a single DataNode. To free up space on particular DataNodes, run the balancer (see the sketch below). Also, a block-distribution-aware application can pin its block replicas to particular DataNodes so that the pinned replicas are not moved during cluster balancing. https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.0/bk_hdfs-administration/content/overview_hdfs_balancer.html
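A minimal example of invoking the balancer from the command line (the 10% threshold shown here is the default; a smaller value balances more tightly but moves more blocks):

```
# Balance the cluster until every DataNode's utilization is within
# 10 percentage points of the overall cluster utilization:
hdfs balancer -threshold 10
```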
11-29-2020 12:09 AM
When the JournalNode quorum is restarted, the epoch number changes. We usually see these errors when the JournalNodes are not in sync. Check the writer epoch in the current directory of each JournalNode process to find which JournalNode is lagging; you can then manually copy the files from a working JournalNode, and the lagging one will pick them up (a sketch is below). This should happen automatically when the JournalNodes are restarted; if not, the above is the procedure.
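A hedged sketch of the manual copy, assuming an HDP-style journal directory of /hadoop/hdfs/journal/<nameservice>/current (the nameservice name mycluster and the host name healthy-jn are placeholders; adjust both for your cluster):

```
# 1. Compare the writer epoch across JournalNodes:
cat /hadoop/hdfs/journal/mycluster/current/last-promised-epoch
cat /hadoop/hdfs/journal/mycluster/current/last-writer-epoch

# 2. On the lagging JournalNode, stop the process, then copy the edits
#    from a healthy JournalNode:
scp -r healthy-jn:/hadoop/hdfs/journal/mycluster/current /hadoop/hdfs/journal/mycluster/

# 3. Restore ownership and restart the JournalNode:
chown -R hdfs:hadoop /hadoop/hdfs/journal/mycluster/current
```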