Member since
01-07-2020
36
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3729 | 04-04-2022 04:17 AM
04-05-2022 06:26 AM
There are two ways. One is adding the properties directly in hdfs-site.xml; the other is triggering the balancer with the parameters on the command line, for example:

nohup hdfs balancer -Ddfs.balancer.moverThreads=300 -Ddfs.datanode.balance.max.concurrent.moves=20 -Ddfs.datanode.balance.bandwidthPerSec=20480000 -Ddfs.balancer.dispatcherThreads=400 -Ddfs.balancer.max-size-to-move=100737418240 -threshold 10 >/tmp/new_balancer1.out

This runs the balancer with non-default values and finishes the balancing operation much more quickly. **Be aware that running with the above parameters will cause high bandwidth usage and create a lot of I/O load.** For more details on the parameters mentioned above, please refer to this doc: https://hadoop.apache.org/docs/r2.9.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
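For the hdfs-site.xml route, the same tunings can be set persistently. A minimal sketch, using the example values from the command above (property names are from hdfs-default.xml; the values are illustrations, not recommendations):

```xml
<!-- hdfs-site.xml: balancer tunings (example values, tune for your cluster) -->
<property>
  <name>dfs.balancer.moverThreads</name>
  <value>300</value>
</property>
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>20</value>
</property>
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>20480000</value>
</property>
<property>
  <name>dfs.balancer.dispatcherThreads</name>
  <value>400</value>
</property>
<property>
  <name>dfs.balancer.max-size-to-move</name>
  <value>100737418240</value>
</property>
```

Note that dfs.datanode.balance.max.concurrent.moves and dfs.datanode.balance.bandwidthPerSec are read by the DataNodes, so changing them in hdfs-site.xml requires a DataNode restart to take effect; the bandwidth limit alone can also be changed at runtime with `hdfs dfsadmin -setBalancerBandwidth <bytes per second>`.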
04-05-2022 12:37 AM
Hello, the error is due to the exhausted thread quota on the DataNode side. Usually this can be controlled using the balancer parameters; kindly refer to https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.1/data-storage/content/properties_for_configuring_the_balancer.html Ideally, raising the value of "dfs.datanode.balance.max.concurrent.moves" should help you come out of the issue. Network bandwidth can become a bottleneck when dealing with a large volume of data movement, but according to this error, the problem is the thread quota.
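As a sketch of that fix (this only runs on a host with the HDFS client against a live cluster; the value 50 is purely illustrative, not a recommendation):

```shell
# Retry the balancer with a higher per-DataNode concurrent-move quota.
# Larger values move data faster but increase DataNode I/O and network load.
hdfs balancer \
  -Ddfs.datanode.balance.max.concurrent.moves=50 \
  -threshold 10
```

If you instead set the property in hdfs-site.xml, remember it is a DataNode-side setting and needs a DataNode restart to apply.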
04-04-2022 04:17 AM
Hello, if the query is resolved, can you kindly mark this thread as solved?
01-06-2022 07:31 AM
First of all, validate in ZooKeeper whether there are entries for the HBase id. There is also an easy way to wipe the slate clean: bin/hbase clean. Select the -cleanAll option, which deletes the HDFS data and also the ZooKeeper data. This should clean things up and get things going. **Make sure to stop the HBase service before doing this.** Alternatively, you can use the -cleanZk option to delete only the ZooKeeper data and have it repopulated. The steps remain the same: bring down the HBase service and run these commands from the admin/master nodes. **These actions can't be reverted.**
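A sketch of the two options (runs only on an HBase node; check the exact flag spelling on your version by running `bin/hbase clean` with no arguments, which prints the usage):

```shell
# IRREVERSIBLE: stop the HBase service before running either command.

# Option 1: delete HBase data in both HDFS and ZooKeeper (full wipe).
bin/hbase clean --cleanAll

# Option 2: delete only the ZooKeeper znodes; they are re-created
# when HBase is started again.
bin/hbase clean --cleanZk
```

Run these from the master/admin node, then restart the HBase service.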
09-17-2021 03:10 AM
Hello, if you have unbalanced disks within a DataNode, please use the intra-DataNode disk balancer. Disks are usually filled in a round-robin fashion, and since a few disks are smaller than the others, we run into issues. Please refer to this doc: https://blog.cloudera.com/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/ We can also use the available-space volume choosing policy. The regular HDFS balancer balances DataNodes by a specified threshold percentage, so it considers the overall usage of the DataNode rather than the individual disks on the DataNode.
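The intra-DataNode workflow from the blog post above can be sketched as follows (requires dfs.disk.balancer.enabled=true on the DataNode; dn1.example.com is a placeholder hostname, and the plan path shown is an assumption — use the path the plan step prints):

```shell
# 1. Generate a plan describing data moves between this DataNode's disks.
hdfs diskbalancer -plan dn1.example.com

# 2. Execute the generated plan (the -plan step prints the actual JSON path).
hdfs diskbalancer -execute /system/diskbalancer/<date>/dn1.example.com.plan.json

# 3. Check progress of the running plan.
hdfs diskbalancer -query dn1.example.com
```

This moves blocks between volumes of a single DataNode, which the cluster-wide balancer does not do.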
11-29-2020 12:09 AM
When we restart the JournalNode quorum, the epoch number changes. We usually see these errors when the JournalNodes are not in sync. Check the writer epoch in the current directory of each JournalNode process; for whichever JournalNode is lagging, we can manually copy the files over from a working JournalNode and it will pick them up. This should happen automatically when we restart the JournalNodes; if not, the above is the procedure.
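To compare epochs, the QJM storage directory on each JournalNode keeps them in plain files. A sketch, assuming a typical layout (substitute your own dfs.journalnode.edits.dir and nameservice name for the placeholder path):

```shell
# Run on each JournalNode host and compare the values across the quorum;
# the node with lower epochs is the one lagging behind.
cat /hadoop/hdfs/journal/<nameservice>/current/last-promised-epoch
cat /hadoop/hdfs/journal/<nameservice>/current/last-writer-epoch
```

Stop the lagging JournalNode before copying edits files into its current directory from a healthy peer, then start it again so it rejoins the quorum.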