Member since: 02-02-2016

583 Posts | 518 Kudos Received | 98 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4179 | 09-16-2016 11:56 AM |
| | 1723 | 09-13-2016 08:47 PM |
| | 6912 | 09-06-2016 11:00 AM |
| | 4153 | 08-05-2016 11:51 AM |
| | 6222 | 08-03-2016 02:58 PM |

04-18-2016 10:01 AM
@Alex Raj You are trying to connect to the metastore port 9083, which doesn't understand these Thrift calls. Can you please check whether you have a running HiveServer2 instance in your cluster, probably on port 10000 or 10001?
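In case it helps, below is a minimal sketch of connecting straight to HiveServer2 over JDBC, assuming HiveServer2 is listening on its default binary port 10000 and the hive-jdbc driver (with its dependencies) is on the classpath; the hostname, database, user, and query are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveServer2Check {
    public static void main(String[] args) throws Exception {
        // Older hive-jdbc versions may need explicit driver registration
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder host: point this at the node running HiveServer2
        // (default binary port 10000), not at the metastore port 9083
        String url = "jdbc:hive2://hiveserver2-host:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

If this connects while port 9083 does not, that confirms the client should be pointed at HiveServer2 rather than the metastore.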

04-15-2016 02:36 PM
		1 Kudo
@krishna sampath OK, so to integrate a custom Java program with HBase, you probably need to add the path below to your client's Java classpath; I believe it contains all the jars required to connect:
/usr/hdp/current/hbase-client/lib/
You can also look in the path below for more HBase jars:
/usr/hdp/current/hbase-master/lib/
For Hive, you can find all the jars in:
/usr/hdp/current/hive-client/lib/
Will that help?
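As a rough illustration of what a client built against those jars looks like, here is a minimal sketch that assumes the jars from /usr/hdp/current/hbase-client/lib/ and a valid hbase-site.xml (with the ZooKeeper quorum) are on the classpath; the table name and row key are just placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientExample {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath (ZooKeeper quorum, etc.)
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) { // placeholder table
            Result result = table.get(new Get(Bytes.toBytes("row1")));           // placeholder row key
            System.out.println("Row found: " + !result.isEmpty());
        }
    }
}
```

To run it you would typically put /usr/hdp/current/hbase-client/lib/* plus the HBase conf directory (for example /etc/hbase/conf) on the java -cp classpath.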

04-15-2016 02:14 PM
@krishna sampath Hi, can you please explain this a bit more? What is the use case, which jars are involved, and where do you want to put them?

04-15-2016 11:42 AM
@Saurabh Kumar Hi, did that property work for you?

04-15-2016 11:39 AM
@gsharma Yes, you can try forcing a checkpoint first, but I doubt that will work. Also, can you check whether you have sufficient local disk space on the SNN node, plus free space on HDFS? If disk space is not the problem, then we can restart the SNN, since that will not cause any issue to the primary NameNode or to running jobs.
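For the HDFS side of that space check, a small sketch like the one below (assuming the Hadoop client jars and the cluster's core-site.xml/hdfs-site.xml are on the classpath) prints the cluster's remaining capacity; the local checkpoint directory used for the SNN check is only an example path.

```java
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class HdfsSpaceCheck {
    public static void main(String[] args) throws Exception {
        // Reads fs.defaultFS from core-site.xml on the classpath
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            FsStatus status = fs.getStatus();
            System.out.println("HDFS capacity (bytes) : " + status.getCapacity());
            System.out.println("HDFS used (bytes)     : " + status.getUsed());
            System.out.println("HDFS remaining (bytes): " + status.getRemaining());
        }

        // Example local path: substitute the SNN checkpoint directory on that node
        File checkpointDir = new File("/hadoop/hdfs/namesecondary");
        System.out.println("Local usable space (bytes): " + checkpointDir.getUsableSpace());
    }
}
```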

04-15-2016 11:28 AM
@Marco Gaido I doubt compression would cause a 13-hour delay; more likely something is going on on the query planning side. Can you please share what exactly it is doing in the last stage? You can check this on the Application Master UI page while your query is running.

04-15-2016 10:26 AM
		1 Kudo
@tunglq it There is no straightforward way to identify which file was written by which job, so it takes a little manual work: parse all the job logs with a script and look for occurrences of that specific file path or name. In most cases, if you ran a MapReduce job, the ApplicationMaster container logs should have that information; if not, it is better to parse each job's container logs one by one with a script. Will that help?
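As a very rough sketch of that scripted approach, the snippet below walks a local directory of already-downloaded container logs (for example, dumped per application with yarn logs -applicationId into local files) and reports which log files mention the HDFS path of interest; the directory and file path are placeholders.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LogGrep {
    public static void main(String[] args) throws IOException {
        // Placeholder locations: a local dump of container logs and the file path to look for
        Path logDir = Paths.get("/tmp/job-logs");
        String needle = "/user/hive/warehouse/mydb.db/mytable/part-00000";

        try (Stream<Path> files = Files.walk(logDir)) {
            files.filter(Files::isRegularFile)
                 .forEach(f -> {
                     try (Stream<String> lines = Files.lines(f, StandardCharsets.UTF_8)) {
                         if (lines.anyMatch(line -> line.contains(needle))) {
                             System.out.println("Match in: " + f);
                         }
                     } catch (IOException | UncheckedIOException e) {
                         // Skip unreadable or non-text log files
                     }
                 });
        }
    }
}
```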

04-15-2016 08:50 AM
@Maeve Ryan Hi, please let me know if you are still stuck on this issue. Thanks.

04-14-2016 04:27 PM
		1 Kudo
@Mugdha Roy By default, Sqoop doesn't compress your output data. Are you by chance using gzip compression? Is this only happening with Teradata imports?

04-14-2016 04:12 PM
@Maeve Ryan I believe that to delete the logs and other non-HDFS data you need to log in to the machine and run the rm command, but the dfs.datanode.du.reserved property can be set by logging in to Ambari and searching for it in the HDFS > Configs section (please see the attached screenshot). However, I think the default value of dfs.datanode.du.reserved is sufficient in most cases. Regarding your job, what is the size of the data you are trying to process?
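Just to make the property concrete, here is a small sketch (assuming the Hadoop client jars and the cluster's hdfs-site.xml are on the classpath) that prints the effective dfs.datanode.du.reserved value next to the actual free space on a DataNode data directory; the data directory path is a placeholder.

```java
import java.io.File;

import org.apache.hadoop.conf.Configuration;

public class DuReservedCheck {
    public static void main(String[] args) {
        // Loads core-site.xml/hdfs-site.xml from the classpath
        Configuration conf = new Configuration();

        // dfs.datanode.du.reserved: bytes per volume kept free for non-HDFS use (default 0)
        long reserved = conf.getLong("dfs.datanode.du.reserved", 0L);

        // Placeholder path: use the actual dfs.datanode.data.dir volume on that host
        File dataDir = new File("/hadoop/hdfs/data");

        System.out.println("dfs.datanode.du.reserved (bytes): " + reserved);
        System.out.println("Usable space on " + dataDir + " (bytes): " + dataDir.getUsableSpace());
    }
}
```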