Member since 01-04-2019

77 Posts
27 Kudos Received
8 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4020 | 02-23-2018 04:32 AM |
| | 1534 | 02-23-2018 04:15 AM |
| | 1372 | 01-20-2017 02:59 PM |
| | 2039 | 01-18-2017 05:01 PM |
| | 5390 | 06-01-2016 01:26 PM |

Posted 12-29-2020 07:33 AM

The export command below worked for me.

CREATE TABLE departments_export (departmentid INT(11), department_name VARCHAR(45), created_date TIMESTAMP);

sqoop export --connect jdbc:mysql://<host>:3306/DB --username cloudera --password *** \
  --table departments_export \
  --export-dir '/user/cloudera/departments_new/*' \
  -m 1 \
  --input-fields-terminated-by ','

Sample input: 103,Finance,2020-10-10 10:10:00
						
					
Posted 09-29-2017 07:14 PM · 1 Kudo

Check this: https://oozie.apache.org/docs/4.0.0/WebServicesAPI.html#Job_Log. You can use curl to call the REST API.
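
A minimal sketch of that curl call, assuming the usual default Oozie port (11000); the host name and workflow job ID below are placeholders:

# Oozie host/port and workflow job ID are placeholders -- substitute your own.
OOZIE_URL=http://oozie-host.example.com:11000/oozie
JOB_ID=0000001-201229000000000-oozie-oozi-W

# GET the job log via the Oozie Web Services API (?show=log).
curl -s "${OOZIE_URL}/v1/job/${JOB_ID}?show=log"

# On a Kerberized cluster, SPNEGO authentication is usually required:
# curl -s --negotiate -u : "${OOZIE_URL}/v1/job/${JOB_ID}?show=log"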
						
					
Posted 10-12-2017 08:18 PM

@gnovak @tuxnet Would resource sharing still work if ACLs are configured for the separate tenant queues? If the ACLs for Q1 and Q2 are different, will elasticity and preemption still be supported?

Could you also please share the workload/application details that you used for these experiments? I am trying to run similar experiments on elasticity and preemption with the Capacity Scheduler. I am using a simple Spark word-count application on a large file, but I am not able to get a feel for resource sharing among the queues with it.

Thanks in advance.
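
For illustration, submitting the same word-count job to two tenant queues might look like the sketch below; the class name, jar, queue names, and input path are placeholders, not taken from the original post:

# Submit the same word-count job to two tenant queues to observe elasticity/preemption.
spark-submit --master yarn --deploy-mode cluster --queue Q1 \
  --class org.example.WordCount wordcount.jar hdfs:///data/large_file.txt

spark-submit --master yarn --deploy-mode cluster --queue Q2 \
  --class org.example.WordCount wordcount.jar hdfs:///data/large_file.txt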
						
					
Posted 01-19-2017 01:51 PM · 1 Kudo

We were ultimately able to get everything back in shape, but it wasn't pretty. Too many steps to detail here.
						
					
Posted 06-14-2016 03:58 PM · 3 Kudos

@Pranay Vyas, When enabling Kerberos, Ambari can be set to integrate with an MIT KDC, Active Directory, and (soon) FreeIPA. This setting allows Ambari to interact with the specific KDC as needed.

In the case of Active Directory, Ambari uses Active Directory's LDAP interface via the LDAPS protocol. During the enable-Kerberos workflow, the user needs to supply details about this interface (LDAPS URL, container DN, and administrative credentials). Ambari can also be configured to set certain properties on the accounts it creates while enabling Kerberos. Note that the protocol MUST be LDAPS, since Active Directory requires a secure connection in order for a password to be set or updated on an account in the domain.

As part of this process, Ambari will internally create and distribute the keytab files that are needed. This is possible because Ambari generates and temporarily holds on to the passwords for each account it creates in Active Directory. Once the process is complete, the passwords are discarded and cannot be retrieved; however, the keytab files will exist and be distributed, so the passwords are no longer needed.
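
As an illustration only (the host, port, bind account, and container DN below are made-up placeholders), the LDAPS interface that Ambari will use can be sanity-checked with ldapsearch before running the wizard:

# Verify the AD LDAPS endpoint and container DN are reachable with the admin credentials.
ldapsearch -H ldaps://ad.example.com:636 -D "hadoop-admin@EXAMPLE.COM" -W \
  -b "OU=hadoop,DC=example,DC=com" "(objectClass=user)" cn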
						
					
Posted 06-05-2016 08:03 AM

Thanks, I solved the problem. The solution was to increase the number of cores for Ambari. Tez and the other services are working well now. Thank you!
						
					
Posted 05-11-2016 09:45 AM

I had to un-kerberize and re-kerberize the cluster; now it works!
						
					
Posted 12-08-2016 12:19 PM

My assumption was correct: the datanodes (probably every node) had the same UUID, which caused this issue. I removed the installed software, directories, and files, then re-registered the nodes, and everything worked fine afterwards.
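
If you want to confirm this kind of clash, one way is to compare the DataNode UUID recorded in the VERSION file on each node. This is a sketch assuming a typical data directory; the actual path depends on dfs.datanode.data.dir:

# Compare this value across nodes; cloned machines can end up with identical UUIDs.
grep datanodeUuid /hadoop/hdfs/data/current/VERSION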
						
					
Posted 08-31-2017 11:37 PM

As we have been bitten by the AD issues mentioned by @Pranay Vyas, I thought I'd expand upon the issue. We wanted two clusters as similar as possible for DR purposes and were looking at using different AD OUs but the same cluster name. Please note that as of HDP 2.5.5 / Ambari 2.4.2, keytabs are generated following the "name-cluster-name" pattern (e.g. ambari-qa-sandpit). You can create the two sets of AD principals, but Kerberos enablement fails (usually around ZooKeeper) with the error "client not found in Kerberos database", even though you can see the entities in AD or via an ldapsearch. This means that, by default, you can't have two clusters with the same name connected to the same AD. We didn't investigate changing the Kerberos naming pattern, but that could possibly fix the issue.
						
					
Posted 03-02-2016 07:51 PM

Update: the problem is solved in the newer ODBC driver. The use of _HOST for the Kerberos principal was introduced from version 2.0.4 onwards.

Regards,
Pranay Vyas
						
					