Member since 09-24-2015

76 Posts
32 Kudos Received
10 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1806 | 02-24-2017 09:17 AM |
|  | 5863 | 02-13-2017 04:49 PM |
|  | 2448 | 09-26-2016 06:44 AM |
|  | 1459 | 09-16-2016 06:24 AM |
|  | 2485 | 08-25-2016 02:27 PM |
			
    
	
		
		
09-13-2019 01:37 PM

Short Description: This article describes how to renew the Kerberos ticket used by Falcon.

Article

It has been observed that the Falcon server fails to perform operations after its Kerberos credentials expire, producing the following exception:

	Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

To solve this issue, set the following parameter in the Falcon startup.properties through the Ambari UI so that the Kerberos credentials are revalidated. The value of this property is in seconds.

1. Stop the Falcon server.
2. Set the parameter: *.falcon.service.authentication.token.validity=<value in seconds>
3. Start the Falcon server.

Note: This article applies to versions greater than HDP-2.5.*.
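For example, a minimal sketch of the relevant line in startup.properties; the 86400-second value (24 hours) is only an illustrative choice, not a recommendation from the original article:

	# Revalidate Falcon's Kerberos credentials every 24 hours (illustrative value)
	*.falcon.service.authentication.token.validity=86400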
						
					
    
	
		
		
03-10-2017 10:36 PM

@mayki wogno Try deleting the Oozie auth token and see if that helps:

	rm ~/.oozie-auth-token
						
					
			
    
	
		
		
02-24-2017 09:17 AM

@Sankar T To resolve this issue, the following is the workaround:

1. Remove the service org.apache.falcon.metadata.MetadataMappingService from *.application.services in the Falcon startup.properties, available in /etc/falcon/conf/.
2. Restart the Falcon server.

Hope this helps.
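As a rough illustration of the edit (the other entries in the list below are placeholders, not Falcon's actual default service list), the property would change roughly as follows:

	# Before (illustrative):
	*.application.services=org.apache.falcon.example.ServiceA,org.apache.falcon.metadata.MetadataMappingService,org.apache.falcon.example.ServiceB
	# After removing MetadataMappingService (illustrative):
	*.application.services=org.apache.falcon.example.ServiceA,org.apache.falcon.example.ServiceB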
						
					
			
    
	
		
		
02-13-2017 04:49 PM

1 Kudo

@Abb Code Try defining the required variables through the global section in workflow.xml and see if that helps. More details about defining the global section are available at the following URL:

https://oozie.apache.org/docs/4.2.0/WorkflowFunctionalSpec.html#a19_Global_Configurations
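A minimal sketch of a global section in workflow.xml; the workflow name, property, and parameter names below are illustrative and not taken from the original question:

	<workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.5">
	    <!-- Values defined here are inherited by all actions in the workflow -->
	    <global>
	        <job-tracker>${jobTracker}</job-tracker>
	        <name-node>${nameNode}</name-node>
	        <configuration>
	            <property>
	                <name>mapred.job.queue.name</name>
	                <value>${queueName}</value>
	            </property>
	        </configuration>
	    </global>
	    <!-- actions follow ... -->
	</workflow-app>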
						
					
			
    
	
		
		
02-07-2017 08:51 AM

1 Kudo

@rahul gulati I have seen a similar exception occur at the time of launching an Oozie workflow. Try setting the following memory-related parameter in the Oozie workflow.xml to a higher value, such as 1024 MB, so that the workflow launches successfully. For example:

	<property>
	    <name>oozie.launcher.mapred.map.child.java.opts</name>
	    <value>-Xmx1024m</value>
	</property>

See if this helps you.
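For context, a hedged sketch of where such a property typically sits inside an action's configuration block; the action name and action type here are placeholders:

	<action name="example-mr-action">
	    <map-reduce>
	        <job-tracker>${jobTracker}</job-tracker>
	        <name-node>${nameNode}</name-node>
	        <configuration>
	            <!-- Give the launcher mapper a larger heap (illustrative value) -->
	            <property>
	                <name>oozie.launcher.mapred.map.child.java.opts</name>
	                <value>-Xmx1024m</value>
	            </property>
	        </configuration>
	    </map-reduce>
	    <ok to="end"/>
	    <error to="fail"/>
	</action>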
						
					
			
    
	
		
		
01-13-2017 02:48 PM

@Shihab It looks like the Falcon server did not come up properly, which is why the Falcon web UI and client are having issues. Could you please share the Falcon application log from the Falcon logs directory so the Falcon server issue can be analyzed?
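On an HDP node the log is usually found under the Falcon log directory; the exact path and file name below are assumptions and may differ in your installation:

	# Typical location of the Falcon application log on HDP (path is an assumption)
	less /var/log/falcon/falcon.application.log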
						
					
			
    
	
		
		
09-26-2016 06:44 AM

2 Kudos

@alina n If there is a delay in running a job through Oozie, please check that the ResourceManager is not overwhelmed and that there is sufficient capacity in the cluster to execute jobs.
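As a hedged illustration of such a check from a node with a YARN client (the queue name "default" is an assumption):

	# Applications accepted but still waiting for resources
	yarn application -list -appStates ACCEPTED
	# Current capacity and usage of the "default" queue
	yarn queue -status default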
						
					
			
    
	
		
		
09-16-2016 06:24 AM

2 Kudos

@alina n It looks like you need to add the value "localhost" or "127.0.0.1" to the parameter hadoop.proxyuser.oozie.hosts in /etc/hadoop/conf/core-site.xml. Once you add this through the Ambari UI, please save and restart the Oozie service.

	<property>
	    <name>hadoop.proxyuser.oozie.hosts</name>
	    <value>127.0.0.1,localhost,sandbox.hortonworks.com</value>
	</property>

Hope this helps you.
						
					
			
    
	
		
		
09-08-2016 05:18 AM

7 Kudos

In this article, we will see how to perform mirroring of Hive data and metadata using Falcon from a source cluster to a destination cluster. This article is based on HDP 2.5.
Configure Hive

Configure the source and target Hive by clicking "Hive" from the Ambari Services menu, then click "Configs". Scroll down to "Custom hive-site", click it, and then click "Add Property" to add the following properties with their values:

	hive.metastore.event.listeners = org.apache.hive.hcatalog.listener.DbNotificationListener
	hive.metastore.dml.events = true

Press OK to save the changes, then restart all the impacted services.
Bootstrap Table and DB

Before creating the Hive DR mirroring job to replicate Hive data/metadata for a DB or table, it is required to perform an initial bootstrap of the table and DB from the source to the target cluster.

Table Bootstrap

For bootstrapping table replication, do an EXPORT of the table in question on the source cluster, distcp the export directory to the target cluster, and do an IMPORT on the target cluster. Export/Import is described here:

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport

For example, create the table global_sales and insert records:

	hive > create table global_sales
	       (customer_id string, item_id string, quantity float, price float, time timestamp)
	       partitioned by (country string);
	hive > insert into table global_sales partition (country = 'us') values ('c1', 'i1', '1', '1', '2001-01-01 01:01:01');

Start the bootstrap:

	## On source cluster:
	hive > export table global_sales to '/user/ambari-qa/export_sql';
	$ hadoop distcp hdfs://machine-1-1.openstacklocal:8020/user/ambari-qa/export_sql \
	                hdfs://machine-2-1.openstacklocal:8020/user/ambari-qa/import_sql
	## On target cluster:
	hive > import table global_sales from '/user/ambari-qa/import_sql';

The above steps set up the target table in sync with the source table, so that events on the source cluster that modify the table are then replicated over.

Database Bootstrap

For bootstrapping DB replication, the target DB must first be created. This step is expected because DB replication definitions can be set up by users only on a pre-existing DB. Second, we need to export all tables in the source DB and import them in the target DB, as described in the table bootstrap (a sketch follows below).
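An illustrative DB bootstrap sketch; the database name sales_db is a placeholder, not part of the original example:

	## On target cluster, pre-create the database (name is illustrative):
	hive > create database sales_db;
	## Then bootstrap each table in the source DB with export / distcp / import,
	## exactly as shown for global_sales above.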
Set up source and target cluster staging/working directories

Source cluster:

	[root@machine-1-1 ~]# su - falcon
	hadoop fs -mkdir -p /apps/falcon/primaryCluster/staging
	hadoop fs -mkdir -p /apps/falcon/primaryCluster/working
	hadoop fs -chmod 777 /apps/falcon/primaryCluster/staging

Target cluster:

	[root@machine-2-1 ~]# su - falcon
	hadoop fs -mkdir -p /apps/falcon/backupCluster/staging
	hadoop fs -mkdir -p /apps/falcon/backupCluster/working
	hadoop fs -chmod 777 /apps/falcon/backupCluster/staging
Create cluster entity

Navigate to the Falcon UI from the Ambari services menu and create the source cluster entity by clicking "Create" -> "Cluster". Save the source cluster entity by clicking "Next" -> "Save".

Create the target cluster entity in the same way by clicking "Create" -> "Cluster", then save it by clicking "Next" -> "Save".
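Since the UI screenshots are not reproduced here, the following is a rough sketch of what the resulting source cluster entity XML might look like; the endpoint ports and component versions are assumptions and should be taken from your own cluster:

	<cluster name="primaryCluster" description="Source cluster" colo="primary" xmlns="uri:falcon:cluster:0.1">
	    <interfaces>
	        <!-- Endpoints below are illustrative; use the hostnames/ports of your cluster -->
	        <interface type="readonly"  endpoint="hftp://machine-1-1.openstacklocal:50070" version="2.7.3"/>
	        <interface type="write"     endpoint="hdfs://machine-1-1.openstacklocal:8020" version="2.7.3"/>
	        <interface type="execute"   endpoint="machine-1-1.openstacklocal:8050" version="2.7.3"/>
	        <interface type="workflow"  endpoint="http://machine-1-1.openstacklocal:11000/oozie/" version="4.2.0"/>
	        <interface type="messaging" endpoint="tcp://machine-1-1.openstacklocal:61616?daemon=true" version="5.1.6"/>
	        <interface type="registry"  endpoint="thrift://machine-1-1.openstacklocal:9083" version="1.2.1"/>
	    </interfaces>
	    <locations>
	        <location name="staging" path="/apps/falcon/primaryCluster/staging"/>
	        <location name="temp"    path="/tmp"/>
	        <location name="working" path="/apps/falcon/primaryCluster/working"/>
	    </locations>
	</cluster>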
Insert records in the source Hive server for replication

Insert some records in the source Hive server to replicate to the target Hive server:

	hive > insert into table global_sales partition (country = 'uk') values ('c2', 'i2', '2', '2', '2001-01-01 01:01:02');
Prepare and submit the Hive DR mirroring job

To submit the Hive DR mirroring job, click "Create" -> "Mirror" -> "Hive" and then fill in the required values. Click "Next" -> "Save" to save the Hive DR mirror job.

Submit and schedule HiveDR

Check output

Once the scheduled Hive DR process has completed (checked from the Oozie UI), verify the target Hive server for output. Earlier, we inserted two records on the source Hive server; now both records are available on the target Hive server.
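For example, a simple hedged verification on the target Hive server; the exact queries are only illustrative:

	hive > show partitions global_sales;
	hive > select * from global_sales where country in ('us', 'uk');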
						
					
    
	
		
		
09-05-2016 01:21 PM

Thanks @asitabh kumar
						
					