Member since 01-04-2019

Posts: 77
Kudos Received: 27
Solutions: 8

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4020 | 02-23-2018 04:32 AM |
| | 1534 | 02-23-2018 04:15 AM |
| | 1372 | 01-20-2017 02:59 PM |
| | 2038 | 01-18-2017 05:01 PM |
| | 5388 | 06-01-2016 01:26 PM |
			
    
	
		
		
Posted 02-23-2018 04:32 AM · 1 Kudo

@Raj B If you have both clusters up and running, and the databases are not very large, you can export tables from one cluster to the other using Hive's EXPORT/IMPORT commands: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport
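A sketch of that round trip, assuming hypothetical table names, export path, and NameNode addresses; with `DRY_RUN=1` the script only records the commands instead of running them against a cluster:

```shell
# Hypothetical table/paths/hosts; DRY_RUN=1 records the plan instead of executing.
DRY_RUN=1
PLAN=""
run() {
  PLAN="${PLAN}+ $*
"
  [ "$DRY_RUN" = "1" ] || "$@"
}

# 1) On the source cluster: dump table data + metadata to an HDFS directory.
run hive -e "EXPORT TABLE mydb.events TO '/tmp/hive_export/events';"

# 2) Copy the exported directory to the target cluster.
run hadoop distcp \
  hdfs://source-nn:8020/tmp/hive_export/events \
  hdfs://target-nn:8020/tmp/hive_export/events

# 3) On the target cluster: recreate the table from the copied directory.
run hive -e "IMPORT TABLE mydb.events FROM '/tmp/hive_export/events';"

printf '%s' "$PLAN"
```

Set `DRY_RUN=0` and run each step on the appropriate cluster; the EXPORT directory carries both the data files and the table metadata, so no DDL needs to be copied by hand.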
						
					
Posted 02-23-2018 04:15 AM · 1 Kudo

@Giuseppe D'Agostino All HDP packages are installed primarily under /usr/hdp. You can mount a dedicated disk at the /usr/hdp path before installing.
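A minimal sketch of dedicating a disk to /usr/hdp; the device name /dev/sdb1 is hypothetical, and `DRY_RUN=1` only records the commands since they require root on a real host:

```shell
# Hypothetical device; DRY_RUN=1 records instead of executing (needs root).
DRY_RUN=1
CMDS=""
run() {
  CMDS="${CMDS}+ $*
"
  [ "$DRY_RUN" = "1" ] || "$@"
}

run mkfs.ext4 /dev/sdb1        # format the dedicated disk
run mkdir -p /usr/hdp          # create the mount point
run mount /dev/sdb1 /usr/hdp   # mount it before installing HDP packages
printf '%s' "$CMDS"

# To persist the mount across reboots, append a line like this to /etc/fstab:
#   /dev/sdb1  /usr/hdp  ext4  defaults,noatime  0 0
```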
						
					
Posted 09-29-2017 07:14 PM · 1 Kudo

Check the Oozie Web Services API documentation: https://oozie.apache.org/docs/4.0.0/WebServicesAPI.html#Job_Log. You can use curl to call the REST API.
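For example, assuming a placeholder Oozie server and workflow job id (substitute your own):

```shell
# Placeholders: substitute your Oozie server address and a real workflow job id.
OOZIE_URL="http://oozie-host:11000/oozie"
JOB_ID="0000002-170929000000000-oozie-oozi-W"

# GET .../v1/job/<id>?show=log returns the job's log as plain text.
LOG_CMD="curl -s ${OOZIE_URL}/v1/job/${JOB_ID}?show=log"
echo "$LOG_CMD"
```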
						
					
Posted 09-29-2017 05:35 PM

Since you are asking about the Secondary NameNode: a Secondary NameNode never serves name/metadata requests, even if you shut down the primary NameNode. To get automatic failover you need a Standby NameNode configured with HDFS HA. You can read more here: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Secondary_NameNode. One way to check whether your Secondary NameNode has the latest fsimage is to compare the size of the `current` directories on the NameNode and Secondary NameNode.
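A quick sketch of that size check; the paths below are common HDP defaults, and the authoritative locations are `dfs.namenode.name.dir` and `dfs.namenode.checkpoint.dir` in hdfs-site.xml:

```shell
# Default-ish HDP paths; confirm against hdfs-site.xml on your cluster.
NN_CURRENT="/hadoop/hdfs/namenode/current"
SNN_CURRENT="/hadoop/hdfs/namesecondary/current"

# Compare the two sizes; a large gap suggests the SNN checkpoint is stale.
CHECK="du -sh ${NN_CURRENT}       # run on the NameNode host
du -sh ${SNN_CURRENT}  # run on the Secondary NameNode host"
echo "$CHECK"
```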
						
					
Posted 09-29-2017 05:28 PM

1) In the Oozie UI you can see the status of all past workflows. 2) If you have SmartSense Activity Explorer and Activity Analyzer set up, you can query the activity.job table for all jobs that ran in a given period whose job name/type contains "oozie".
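A sketch of that query; the column names (job_name, job_type, start_time) are assumptions based on the description above, so verify them against the actual activity.job schema on your cluster:

```shell
# Column names are assumed, not verified against the SmartSense schema.
QUERY="SELECT * FROM activity.job
WHERE (job_name LIKE '%oozie%' OR job_type LIKE '%oozie%')
  AND start_time BETWEEN '2017-09-01' AND '2017-09-30';"
echo "$QUERY"
```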
						
					
Posted 09-27-2017 01:16 AM

@Br Hmedna You are trying to export ORC data into MySQL without converting it to text first. You should use a Sqoop export against a text-format copy of the Hive table. Look at this link: https://community.hortonworks.com/questions/22425/sqoop-export-from-hive-table-specifying-delimiters.html
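A sketch of the two-step export: stage the ORC table as delimited text in Hive, then sqoop-export the staged directory. The database, table, JDBC URL, and warehouse path are all hypothetical:

```shell
# All names/paths below are hypothetical; adjust for your environment.
STAGE_SQL="CREATE TABLE mydb.events_txt
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
AS SELECT * FROM mydb.events;"

SQOOP_CMD="sqoop export \
  --connect jdbc:mysql://mysql-host/mydb --username myuser -P \
  --table events \
  --export-dir /apps/hive/warehouse/mydb.db/events_txt \
  --input-fields-terminated-by ','"

printf '%s\n\n%s\n' "$STAGE_SQL" "$SQOOP_CMD"
```

The staging table's delimiter must match `--input-fields-terminated-by`, which is the point the linked thread makes.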
						
					
Posted 09-27-2017 01:12 AM

@Kumar Veerappan You will have to create a watcher/alert script that identifies which NameNode is active and alerts/emails when the active NameNode flips. The NameNode service exposes JMX, which includes information on which NameNode is active; your watcher script can poll this data to detect failovers:

    "name" : "Hadoop:service=NameNode,name=NameNodeStatus",
    "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
    "State" : "active",
    "NNRole" : "NameNode",
    "HostAndPort" : "host1:8020"
						
					
Posted 09-27-2017 01:06 AM

@Mohammad Shazreen Bin Haini If you are using Ranger to manage permissions, there should be two default policies: 1) an HDFS policy that gives the "hive" user full read/write permission on the /apps/hive/warehouse directory, and 2) a Hive policy that gives the "hive" user full permission to create and drop databases and tables.
						
					
Posted 02-08-2017 08:04 PM

I have seen this error with a customer. The issue was the memory footprint on the node hosting Zeppelin/Livy: free memory was down to 1 GB because Livy had many dead sessions that were not releasing memory. Deleting the Livy sessions freed the memory. You can use the Livy REST API to view sessions and delete the dead ones.
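A sketch of the cleanup. `SAMPLE` stands in for the response of `GET /sessions` on the Livy server; the host, port, and session data are hypothetical, and the DELETE calls are only printed here:

```shell
# Hypothetical Livy endpoint and session data.
LIVY_URL="http://livy-host:8998"
SAMPLE='{"sessions":[{"id":0,"state":"dead"},{"id":1,"state":"idle"},{"id":2,"state":"dead"}]}'

# Pull out ids whose state is "dead" (crude but dependency-free parsing).
DEAD_IDS=$(printf '%s' "$SAMPLE" | tr '}' '\n' \
  | sed -n 's/.*"id":\([0-9]*\),"state":"dead".*/\1/p')

for id in $DEAD_IDS; do
  echo "+ curl -s -X DELETE ${LIVY_URL}/sessions/${id}"   # frees the session
done
```

In a real script, replace `SAMPLE` with `curl -s ${LIVY_URL}/sessions` and drop the `echo` to actually issue the DELETEs.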
						
					
Posted 01-25-2017 07:07 PM · 1 Kudo

I think preemption only applies between leaf queues under the same parent queue, which is why you are seeing this behavior.
						
					