Member since 01-19-2017

Posts: 3676
Kudos Received: 632
Solutions: 372

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 469 | 06-04-2025 11:36 PM |
|  | 996 | 03-23-2025 05:23 AM |
|  | 530 | 03-17-2025 10:18 AM |
|  | 1856 | 03-05-2025 01:34 PM |
|  | 1238 | 03-03-2025 01:09 PM |

02-22-2016 07:06 AM · 1 Kudo

@Amit Sharma Good question. I am not sure how it is related to the protocol version; at the very least, the error message is misleading. This is the only workaround I found. The issue stems from restricting access to the Hive metastore service so that it may impersonate only a subset of Kerberos users, which is controlled by the hadoop.proxyuser.hive.groups property in core-site.xml on the Hive metastore host. The error itself surfaces through org.apache.thrift.protocol (client_protocol); my reasoning was to give the hive user a wildcard privilege, much like root. There is a JIRA open for this. If this resolved your problem, please accept it as the answer. Cheers!
						
					
02-21-2016 10:43 PM · 1 Kudo

@Amit Sharma Gracefully stop all the services using Ambari, then restart them all through Ambari. At times, after a server reboot, you will need to manually start the services that Ambari does not bring up on its own, but subsequent Ambari start-all/stop-all operations will work correctly. Keep me posted.
						
					
02-21-2016 09:50 PM · 1 Kudo

@Amit Sharma If you have Ambari, you can add the properties via Services > HDFS > Configs > Advanced > Custom core-site. Add the properties below:

hadoop.proxyuser.hive.hosts=*
hadoop.proxyuser.hive.groups=*

Then restart all the affected services.
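If you are not using Ambari, the same proxyuser settings can be added directly to core-site.xml on the relevant hosts. A sketch of the fragment (wildcards are permissive; in a hardened cluster you would list specific hosts and groups instead of "*"):

```xml
<!-- core-site.xml: allow the hive service user to impersonate other users.
     Wildcard values are shown only for illustration. -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```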
						
					
02-20-2016 07:28 PM

@Prakash Punj Have a look at this doc: Fine-Grained Permission with HDFS ACLs.
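As a quick illustration of what the doc covers, HDFS ACLs are managed with the `hdfs dfs -setfacl` and `-getfacl` commands. A minimal sketch, assuming ACLs are enabled (dfs.namenode.acls.enabled=true in hdfs-site.xml) and using a hypothetical path and user:

```shell
# Grant user "analyst" read/execute access to a directory beyond the
# basic owner/group/other permissions (path and user are examples).
hdfs dfs -setfacl -m user:analyst:r-x /data/project

# Inspect the resulting ACL entries on the directory
hdfs dfs -getfacl /data/project

# Remove the extra entry again when it is no longer needed
hdfs dfs -setfacl -x user:analyst /data/project
```

These commands must run against a live cluster, so treat the snippet as a pattern rather than a copy-paste recipe.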
						
					
02-18-2016 08:12 PM · 1 Kudo

@Cecilia Posadas Add the mysql-connector-java.jar library to the lib directory inside the Oozie project root directory, where the job.properties and workflow.xml files are located. A better solution is to add mysql-connector-java-*.jar once to the share/lib/sqoop directory in HDFS. Please try that and let me know.
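The share-lib approach can be sketched roughly as below. The HDFS path and Oozie URL are assumptions for illustration; on newer Oozie releases the share lib lives under a timestamped lib_&lt;ts&gt; subdirectory, so check your cluster's actual layout first:

```shell
# Upload the JDBC driver once into the Oozie share lib for Sqoop
# (adjust the path for your Oozie version and cluster)
hdfs dfs -put mysql-connector-java.jar /user/oozie/share/lib/sqoop/

# Ask Oozie to refresh its view of the share lib without a restart
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate
```

This avoids having to bundle the driver into every workflow's local lib directory.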
						
					
02-17-2016 06:01 AM

@Aditya Goyal I would opt to install a MySQL server to resolve the stalemate!
						
					
02-16-2016 05:49 AM · 2 Kudos

@Ojustwin Naik If you are using HDP 2.3, here is the solution: there is a JIRA that was already resolved (link). Let me know if that helped.
						
					
    
	
		
		
02-15-2016 10:13 PM

@wei yang Reading from your logs I see:

- 3 failed attempts to allocate resources on a host
- Blacklisted host Lnx1(.)localdomain(.)com
- Container exited with a non-zero exit code 143, which is typical of a memory misconfiguration; check your yarn-site.xml

This should help you understand the Hadoop concept of a blacklisted node, which marks a node as unhealthy: @link
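For context, exit code 143 corresponds to a container killed with SIGTERM, often because it exceeded its memory allocation. The yarn-site.xml properties involved look roughly like this; the values below are examples only and must be sized to your NodeManager hardware:

```xml
<!-- yarn-site.xml: illustrative memory settings (example values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```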
						
					
02-15-2016 11:38 AM

@Aditya Goyal As stated in my earlier post, the reason is that the Derby database is down. Start your DB so that Oozie can load its configuration; also see the Oozie config.
						
					
02-15-2016 07:13 AM

@Majid Ali Syed Amjad Ali Sayed Did you check all the above steps? Skipping any of them will cause your deployment to stall!
						
					