Member since: 05-24-2019
Posts: 56
Kudos Received: 1
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2284 | 06-15-2022 07:57 AM |
| | 2727 | 06-01-2022 07:21 PM |
06-15-2022
07:57 AM
Ah! Can you try running the HDFS balancer command below? It moves blocks at a decent pace without affecting existing jobs:

```shell
nohup hdfs balancer \
  -Ddfs.balancer.moverThreads=5000 \
  -Ddfs.datanode.balance.max.concurrent.moves=20 \
  -Ddfs.datanode.balance.bandwidthPerSec=10737418240 \
  -Ddfs.balancer.dispatcherThreads=200 \
  -Ddfs.balancer.max-size-to-move=100737418240 \
  -threshold 10 \
  1>/home/hdfs/balancer/balancer-out_$(date +"%Y%m%d%H%M%S").log \
  2>/home/hdfs/balancer/balancer-err_$(date +"%Y%m%d%H%M%S").log
```

If you need any further tuning, you can also refer to this doc: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/data-storage/content/balancer_commands.html
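If the balancer is already running and you want to raise or throttle the DataNode transfer bandwidth without restarting it, `hdfs dfsadmin` can update it cluster-wide. A sketch; the 100 MB/s value below is only an example, and the log path assumes the `nohup` redirection used above:

```shell
# Update balancer bandwidth on all live DataNodes (bytes/sec).
# 104857600 = 100 MB/s -- an example value, tune for your cluster.
hdfs dfsadmin -setBalancerBandwidth 104857600

# Watch balancer progress from the log written by the nohup command.
tail -f /home/hdfs/balancer/balancer-out_*.log
```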
06-14-2022
10:14 PM
Hello @wazzu62 , Could you share the error message you get when you run the `hdfs balancer` command? The CLI command has not been removed in HDP 3.1.5.0.
06-01-2022
07:21 PM
1 Kudo
Hello @clouderaskme , From the error message above, it looks like you are hitting SOLR-3504. This is a limitation on the Solr side: a single shard can only index up to about 2.14 billion documents. The solution is to create a new ranger_audits collection with 2 shards instead of 1, since it can then index more documents. You may also try deleting the older records, if the Solr instance is still up and running, and see whether that resolves the issue. Replace http with https if SSL is enabled, check the port as per your environment, and run the command below:

```shell
curl -ikv --negotiate -u: "http://$(hostname -f):8886/solr/ranger_audits/update?commit=true" \
  -H "Content-Type: text/xml" \
  --data-binary "<delete><query>evtTime:[* TO NOW-15DAYS]</query></delete>"
```

There is another method: splitting the shard. Please refer to the doc below: https://my.cloudera.com/knowledge/ERROR-quotToo-many-documents-composite-IndexReaders-cannot?id=74738
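For reference, a new two-shard collection can be created through the Solr Collections API. This is only a sketch: it assumes the same host/port as above, that a `ranger_audits` configset is already uploaded, and the collection name and `replicationFactor` are example values to adjust for your environment:

```shell
# Create a new collection with 2 shards via the Solr Collections API.
# Name, configset, and replicationFactor are example values.
curl -ikv --negotiate -u: "http://$(hostname -f):8886/solr/admin/collections?action=CREATE&name=ranger_audits_new&numShards=2&replicationFactor=1&collection.configName=ranger_audits"
```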
10-20-2021
08:56 PM
Hello @PrernaU , Unfortunately, ViewFS is not yet supported on CDP, since HDFS federation itself is not yet supported there.
05-11-2021
06:21 PM
Looks like the fallback mechanism hasn't been added. A fallback configuration is required at the destination when running DistCp to copy files between a secure and an insecure cluster. Add the following property to the advanced configuration snippet (if using Cloudera Manager) or, if not, directly to the HDFS core-site.xml:

<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>

https://my.cloudera.com/knowledge/Copying-Files-from-Insecure-to-Secure-Cluster-using-DistCP?id=74873
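The same property can also be passed on the DistCp command line for a one-off run, instead of editing core-site.xml. A sketch; the NameNode hostnames and paths below are placeholders:

```shell
# Pass the fallback property per-invocation via the generic -D option.
# nn-insecure / nn-secure are placeholder NameNode hostnames.
hadoop distcp \
  -D ipc.client.fallback-to-simple-auth-allowed=true \
  hdfs://nn-insecure:8020/source/path \
  hdfs://nn-secure:8020/target/path
```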
05-11-2021
08:45 AM
Ah! Got it, thanks for the update! Can you go through the article once, and also try copying from the source NameNode to the destination NameNode using paths like these:

hdfs://nn1:8020/foo/a
hdfs://nn1:8020/foo/b

https://hadoop.apache.org/docs/r3.0.3/hadoop-distcp/DistCp.html
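Put together as a full invocation, following the example paths in the Apache DistCp guide (nn1/nn2 are placeholder NameNode hostnames):

```shell
# Copy /foo/a and /foo/b from cluster nn1 into /bar/foo on cluster nn2.
hadoop distcp \
  hdfs://nn1:8020/foo/a \
  hdfs://nn1:8020/foo/b \
  hdfs://nn2:8020/bar/foo
```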
05-11-2021
08:27 AM
Can you please check whether you have made the changes as per the doc below? https://docs.cloudera.com/cdp-private-cloud/latest/data-migration/topics/rm-migrate-securehdp-insecurecdp-distcp.html As I understand it, you are migrating data from a secured HDP cluster to an unsecured CDP cluster. Please correct me if my understanding is incorrect.
05-11-2021
03:19 AM
Hi @vciampa , It looks like the arguments being passed are invalid:

Invalid arguments: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "server2.localdomain/10.x.x.x"; destination host is: "svr1.local":9866;

Note that 9866 is the DataNode data-transfer port, whereas DistCp needs to talk to the NameNode RPC endpoint. Can you try using source://nameservice:port and dest://nameservice:port and run the distcp once more?
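To sketch what that looks like with logical HA nameservice URIs: with a configured nameservice the port is omitted, since the NameNode addresses are resolved from hdfs-site.xml. The nameservice names and paths below are placeholders:

```shell
# Using logical HA nameservice names instead of a single host:port.
# sourceNS / destNS are placeholders for your configured nameservices.
hadoop distcp hdfs://sourceNS/source/path hdfs://destNS/target/path
```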