Member since: 05-24-2019

Posts: 56
Kudos Received: 1
Solutions: 2
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1839 | 06-15-2022 07:57 AM |
|  | 2253 | 06-01-2022 07:21 PM |
06-15-2022 07:57 AM
Ah! Can you try running the HDFS balancer command below? It moves blocks at a decent pace and does not affect existing jobs:

nohup hdfs balancer -Ddfs.balancer.moverThreads=5000 -Ddfs.datanode.balance.max.concurrent.moves=20 -Ddfs.datanode.balance.bandwidthPerSec=10737418240 -Ddfs.balancer.dispatcherThreads=200 -Ddfs.balancer.max-size-to-move=100737418240 -threshold 10 1>/home/hdfs/balancer/balancer-out_$(date +"%Y%m%d%H%M%S").log 2>/home/hdfs/balancer/balancer-err_$(date +"%Y%m%d%H%M%S").log

You can also refer to the doc below if you need any further tuning:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/data-storage/content/balancer_commands.html
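If it helps, here is a minimal way to keep an eye on the run; the log paths simply mirror the paths used in the redirects above, and the grep is just one way to pull per-node utilization out of the report:

# Follow the balancer's progress (paths assume the redirects used in the command above)
tail -f /home/hdfs/balancer/balancer-out_*.log
# Check per-DataNode utilization to see it converging toward the threshold
hdfs dfsadmin -report | grep "DFS Used%"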
						
					
06-14-2022 10:14 PM
Hello @wazzu62, kindly share the error message you get when you run the hdfs balancer command, as the CLI command has not been removed in HDP 3.1.5.0.
						
					
06-01-2022 07:21 PM
1 Kudo
Hello @clouderaskme,

From the above error message we can tell that you are hitting SOLR-3504. The issue is a limitation on the Solr side: a single shard can only index up to about 2.14 billion documents.

The solution is to create a new ranger_audits collection with 2 shards instead of 1, so that it can index more documents.

You may also try deleting the older records, if the Solr instance is still up and running, and check whether that resolves the issue. Replace http with https if SSL is enabled, check the port for your environment, and run the command below:

curl -ikv --negotiate -u: "http://$(hostname -f):8886/solr/ranger_audits/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>evtTime:[* TO NOW-15DAYS]</query></delete>"

There is another method of splitting the shard. Please refer to the doc below:
https://my.cloudera.com/knowledge/ERROR-quotToo-many-documents-composite-IndexReaders-cannot?id=74738
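For the 2-shard collection, a rough sketch of what the create call could look like via the Solr Collections API; the hostname, port 8886, and the config set name ranger_audits are assumptions that you would adjust (and swap http for https) to match your environment:

# Sketch only: create a new 2-shard ranger_audits collection via the Collections API
curl -ikv --negotiate -u: "http://$(hostname -f):8886/solr/admin/collections?action=CREATE&name=ranger_audits&numShards=2&replicationFactor=1&collection.configName=ranger_audits"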
						
					
10-20-2021 08:56 PM
Hello @PrernaU, unfortunately ViewFS is not yet a supported feature on CDP, as federation is not supported yet.
						
					
05-11-2021 06:21 PM
It looks like the fallback mechanism has not been added.

A fallback configuration is required at the destination when running DistCp to copy files between a secure and an insecure cluster. Add the following property to the advanced configuration snippet (if using Cloudera Manager), or if not, add it directly to the HDFS core-site.xml:

<property>
<name>ipc.client.fallback-to-simple-auth-allowed</name>
<value>true</value>
</property>

https://my.cloudera.com/knowledge/Copying-Files-from-Insecure-to-Secure-Cluster-using-DistCP?id=74873
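If you prefer to set it per job instead of in core-site.xml, a minimal sketch of the DistCp invocation; the hostnames and paths are placeholders for your clusters:

# Sketch only: pass the fallback property on the command line for a single run
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true \
  hdfs://secure-nn:8020/source/path hdfs://insecure-nn:8020/target/path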
						
					
05-11-2021 08:45 AM
Ah, got it. Thanks for the update! Can you refer to the article once more, and also try copying from the source NameNode to the destination NameNode using paths like these:

hdfs://nn1:8020/foo/a
hdfs://nn1:8020/foo/b

https://hadoop.apache.org/docs/r3.0.3/hadoop-distcp/DistCp.html
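For reference, the basic invocation from the DistCp documentation linked above looks like this; nn1 and nn2 are placeholder NameNode hosts:

# Copy /foo/a and /foo/b from the cluster behind nn1 into /bar/foo on the cluster behind nn2
hadoop distcp hdfs://nn1:8020/foo/a hdfs://nn1:8020/foo/b hdfs://nn2:8020/bar/foo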
						
					
05-11-2021 08:27 AM
Can you please check whether you have made the changes described in the doc below?

https://docs.cloudera.com/cdp-private-cloud/latest/data-migration/topics/rm-migrate-securehdp-insecurecdp-distcp.html

I see that you are migrating data from a secured HDP cluster to an unsecured CDP cluster. Please correct me if my understanding is incorrect.
						
					
05-11-2021 03:19 AM
Hi @vciampa,

It looks like the arguments being passed are invalid:

Invalid arguments: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "server2.localdomain/10.x.x.x"; destination host is: "svr1.local":9866;

Can you try using source://nameservice:port and dest://nameservice:port, and run the distcp once more?
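As a minimal sketch of what that would look like, pointing both sides at the NameNode RPC endpoint (commonly 8020) rather than a DataNode port such as 9866; the nameservice names here are placeholders:

# Sketch only: source and destination addressed by NameNode/nameservice, not a DataNode
hadoop distcp hdfs://source-nameservice:8020/path hdfs://dest-nameservice:8020/path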
						
					
04-26-2021 01:04 AM
Hi @enirys,

From the log snippet below, I suspect the DN is running on the regular ports such as 50010 and 50075 (as per CM). Please confirm from your end.

2021-04-22 23:30:06,918 WARN  conf.Configuration (Configuration.java:getTimeDurationHelper(1659)) - No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2021-04-22 23:30:06,924 ERROR datanode.DataNode (DataNode.java:secureMain(2692)) - Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.

If so, those are unprivileged ports on the OS. Could you try using privileged ports below 1024 (generally we use 1004 and 1006)? The other option is to enable SASL RPC data transfer protection and TLS. The first option should be the easier one. Please try it and let me know.
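To confirm what the DataNode is currently configured with, a quick check; these are standard hdfs getconf calls, and the example values in the comments are only illustrative:

# Data transfer port (e.g. 0.0.0.0:50010, or :1004 for the privileged setup)
hdfs getconf -confKey dfs.datanode.address
# HTTP port (e.g. 0.0.0.0:50075, or :1006 for the privileged setup)
hdfs getconf -confKey dfs.datanode.http.address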
						
					
04-12-2021 11:52 PM
Hi @JGUI,

There is no need to delete the data from the DataNode that is going to be decommissioned. Once the DN is decommissioned, all the blocks on it will be replicated to other DataNodes.

Is there any error that you are encountering while decommissioning? Typically, HDFS self-heals and re-replicates the blocks that become under-replicated because of the decommissioned DN, and the NameNode replicates them from the other two replicas present in HDFS.
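If you want to watch the process, a minimal sketch; note that fsck over the whole namespace can take a while on a large cluster:

# Shows nodes in "Decommission in progress" state and their remaining blocks
hdfs dfsadmin -report
# Shows how many blocks are still under-replicated while re-replication catches up
hdfs fsck / | grep -i "under-replicated"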
						
					