Member since 07-19-2020
162 Posts · 16 Kudos Received · 11 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 783 | 05-01-2025 05:58 AM |
| | 850 | 04-29-2025 09:43 AM |
| | 883 | 04-28-2025 07:01 AM |
| | 1242 | 10-22-2024 05:23 AM |
| | 1410 | 10-11-2024 04:28 AM |
Posted 06-03-2025 07:40 AM
Hi @G_B It could be an issue with your JDK version. Compare the JDK versions on your working and non-working DataNodes and upgrade or downgrade accordingly.
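For example, one quick way to compare the versions side by side, assuming SSH access to the nodes (the host names below are placeholders):

```bash
# Placeholder host names; substitute your actual DataNode hosts.
# `java -version` prints to stderr, hence the 2>&1 redirect.
for host in datanode1 datanode2 datanode3; do
  echo "== $host =="
  ssh "$host" 'java -version' 2>&1 | head -n 1
done
```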
						
					
Posted 05-02-2025 01:51 AM
Hi @shubham_sharma, I tried to reproduce the issue by creating a test Avro table; querying it, I confirmed that it generates CLOSE_WAIT sockets. Thanks a lot.
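A sketch of that repro, in case it helps others (the JDBC URL and table name are placeholders for your environment):

```bash
# Placeholder connection string and table name; adjust as needed.
beeline -u "jdbc:hive2://hs2-host:10000" \
  -e "CREATE TABLE test_avro (id INT) STORED AS AVRO; SELECT * FROM test_avro;"

# Then, on the HiveServer2 host, count sockets stuck in CLOSE_WAIT:
ss -tan state close-wait | wc -l
```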
						
					
Posted 04-29-2025 09:43 AM
Hi @MaraWang The rebase to HBase 2.6.0 is planned for upcoming CDP releases. We recommend monitoring our release notes for updates regarding this change.
						
					
Posted 04-28-2025 07:05 AM
@Shelton Please read my previous answer carefully. None of the properties you provided are in the HBase codebase.
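One way to verify this for yourself, assuming a checkout of the source (the property name below is a placeholder; substitute the one in question):

```bash
# No matches means the property is neither defined nor referenced
# anywhere in the HBase codebase.
git clone --depth 1 https://github.com/apache/hbase.git
grep -rn "hbase.example.property" hbase/ --include='*.java' --include='*.xml'
```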
						
					
Posted 01-25-2025 01:42 AM · 1 Kudo
I found a solution for this problem. I removed the Kerberos database with kdb5_util destroy and recreated it with kdb5_util create -s. Another thing I found: when I first created the Cloudera admin principal, I used cloudera-scm instead of cloudera-scm/admin. I am not sure whether this caused the problem, but after destroying the old database and creating cloudera-scm/admin, credential generation works properly.
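For anyone following along, those steps condensed into commands (run on the KDC host as root; destroying the database is irreversible, so back it up first):

```bash
# WARNING: this wipes the existing KDC database.
kdb5_util destroy -f                          # remove the old Kerberos database
kdb5_util create -s                           # recreate it with a stash file
kadmin.local -q "addprinc cloudera-scm/admin" # admin principal with the /admin instance
```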
						
					
Posted 01-24-2025 11:53 AM
Hi @CloudSeeker7, if you can restate the question with an example, it will be easier to check the issue you are facing and identify the possible root causes. Please provide examples of both a bad record and a good record.
						
					
Posted 12-17-2024 12:41 PM · 1 Kudo
@JSSSS The error is: "java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation."

According to the log, all three DataNodes are excluded (excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8>). With a replication factor of 3, the write must succeed on all three DataNodes, otherwise it fails. HDFS cannot use the excluded nodes, possibly due to:

- Disk space issues.
- Write errors or disk failures.
- Network connectivity problems between the NameNode and DataNodes.

1. Verify that the DataNodes are live and connected to the NameNode:

```
hdfs dfsadmin -report
```

Look at the "Live nodes" and "Dead nodes" sections. If all three DataNodes are excluded, they may show up as dead or decommissioned.

2. Ensure the DataNodes have sufficient disk space for the write operation:

```
df -h
```

Check the HDFS data directories (e.g., /hadoop/hdfs/data). If disk space is full, clear unnecessary files or increase disk capacity:

```
hdfs dfs -rm -r /path/to/old/unused/files
```

3. View the list of excluded nodes:

```
cat $HADOOP_HOME/etc/hadoop/datanodes.exclude
```

If nodes are wrongly excluded, remove their entries from datanodes.exclude, then refresh the NameNode to apply the change:

```
hdfs dfsadmin -refreshNodes
```

4. Block placement policy: if the cluster has DataNodes with specific restrictions (e.g., rack awareness), verify the block placement policy:

```
grep dfs.block.replicator.classname $HADOOP_HOME/etc/hadoop/hdfs-site.xml
```

Default: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

Happy hadooping!
						
					
Posted 10-14-2024 02:53 PM · 1 Kudo
							 @manyquestions Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks. 
						
					
Posted 10-04-2024 04:47 AM
@MaraWang Have you been able to resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.