Member since
05-24-2019
55
Posts
1
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1239 | 06-15-2022 07:57 AM
 | 1619 | 06-01-2022 07:21 PM
06-15-2022
07:57 AM
Ah! Can you try running the HDFS balancer command below? It moves blocks at a decent pace and will not affect existing jobs:

nohup hdfs balancer -Ddfs.balancer.moverThreads=5000 -Ddfs.datanode.balance.max.concurrent.moves=20 -Ddfs.datanode.balance.bandwidthPerSec=10737418240 -Ddfs.balancer.dispatcherThreads=200 -Ddfs.balancer.max-size-to-move=100737418240 -threshold 10 1>/home/hdfs/balancer/balancer-out_$(date +"%Y%m%d%H%M%S").log 2>/home/hdfs/balancer/balancer-err_$(date +"%Y%m%d%H%M%S").log

You can also refer to the doc below if you need to tune it further: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/data-storage/content/balancer_commands.html
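As a quick sanity check before and after running the balancer, you can compare per-DataNode disk usage; a minimal sketch (the log path below simply matches the redirect in the command above):

```shell
# Show per-DataNode capacity and usage. The balancer aims to bring each
# node's "DFS Used%" within -threshold (10 percent here) of the cluster average.
hdfs dfsadmin -report | grep -E 'Name:|DFS Used%'

# Follow the balancer's progress in the log written by the nohup command above.
tail -f /home/hdfs/balancer/balancer-out_*.log
```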
06-06-2022
10:56 AM
@clouderaskme Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
10-22-2021
12:25 AM
@PabitraDas The objective is to copy data between two distinct clusters.
06-03-2021
11:45 AM
@Tylenol, Thank you very much for your help.
05-03-2021
04:36 AM
Hi, the ports are configured correctly; I just restarted my VMs, which fixed the issue. But it happens frequently, and now I'm seeing other errors in the logs:

ERROR datanode.DataNode (DataXceiver.java:run(278)) - ovh-cnode19.26f5de01-5e40-4d8a-98bd-a4353b7bf5e3.datalake.ovh:1019:DataXceiver error processing WRITE_BLOCK operation src: /10.1.2.106:34306 dst: /10.1.2.171:1019
java.io.IOException: Premature EOF from inputStream
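"Premature EOF from inputStream" during a WRITE_BLOCK operation often points to the DataNode exhausting its block-transfer threads under heavy write load, or to the writing client disconnecting mid-transfer. One common mitigation is to raise dfs.datanode.max.transfer.threads in hdfs-site.xml; this is a sketch, and the value shown is an assumption that should be sized for your cluster:

```xml
<property>
  <!-- Upper bound on concurrent block-transfer (DataXceiver) threads per
       DataNode; the default is 4096. The value below is illustrative. -->
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```

A DataNode restart is required for the change to take effect.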
04-12-2021
11:52 PM
Hi @JGUI, there is no requirement to delete the data from the DataNode that is going to be decommissioned. Once the DN has been decommissioned, all the blocks on it will be replicated to other DataNodes. Are you encountering any errors while decommissioning? Typically, HDFS self-heals and re-replicates the blocks that become under-replicated due to the decommissioned DN: the NameNode starts replicating them from the other replicas present in HDFS.
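For reference, the decommission flow described above can be sketched as below. The hostname and exclude-file path are assumptions for illustration; on an Ambari or Cloudera Manager cluster, the management UI performs these steps for you:

```shell
# 1. Add the DataNode's hostname to the exclude file referenced by
#    dfs.hosts.exclude in hdfs-site.xml (path is illustrative).
echo "dn-to-remove.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read the include/exclude lists.
hdfs dfsadmin -refreshNodes

# 3. Watch the node move from "Decommission In Progress" to "Decommissioned"
#    as its blocks are re-replicated elsewhere.
hdfs dfsadmin -report -decommissioning

# 4. After it reports Decommissioned, confirm no blocks remain under-replicated.
hdfs fsck / | grep -i 'under-replicated'
```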