Member since 09-15-2015
- 294 Posts
- 764 Kudos Received
- 81 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2125 | 07-27-2017 04:12 PM |
| | 5420 | 06-29-2017 10:50 PM |
| | 2595 | 06-21-2017 06:29 PM |
| | 3148 | 06-20-2017 06:22 PM |
| | 2770 | 06-16-2017 06:46 PM |
02-15-2017 11:44 PM
6 Kudos
@Kshitij Badani - If the user's home directory is encrypted, the user will not be able to delete a file that is outside the home folder unless the "-skipTrash" option is used. The user should be able to delete the file with "-skipTrash".
The problem is that the trash directory for non-encrypted data resides in the user's home directory. If the user's home is encrypted, un-encrypted data cannot be renamed into that directory, so the delete will fail unless "-skipTrash" is used.
The trash directory lives in the user's home directory so that quota is correctly calculated and assigned for deleted data, and moving un-encrypted data into an encryption zone (EZ) is not allowed for security reasons. So if a user's home directory is encrypted, they have to use "-skipTrash" to delete un-encrypted data.
Another way to look at it: a user whose home is in an EZ should never be creating important data that is un-encrypted.
Please note that deleting encrypted data doesn't have this issue, because for encrypted data the trash is within the EZ itself. Let me know if you have any more doubts. Thanks!
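To illustrate the behavior described above (paths here are hypothetical; this requires a cluster where /user/alice is inside an encryption zone):

```
# Plain delete tries to rename the file into the trash inside the
# encrypted home (/user/alice/.Trash) and fails for un-encrypted data:
hdfs dfs -rm /tmp/report.txt

# Bypassing the trash deletes the file immediately, skipping the rename:
hdfs dfs -rm -skipTrash /tmp/report.txt
```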
02-15-2017 12:40 AM
6 Kudos
Hello @ssathish, Can you please share the value of hadoop.kms.blacklist.DECRYPT_EEK from /etc/ranger/kms/conf/dbks-site.xml? It looks like the user 'hdfs' might be blacklisted.
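For reference, the property in question looks like the following in dbks-site.xml; users listed in its value are denied the DECRYPT_EEK key operation by Ranger KMS (the user names shown are illustrative):

```xml
<!-- /etc/ranger/kms/conf/dbks-site.xml -->
<property>
  <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
  <!-- 'hdfs' appearing in this comma-separated list would explain
       the decrypt failure seen for that user -->
  <value>hdfs,yarn</value>
</property>
```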
02-09-2017 01:27 AM
1 Kudo
							 Sure. Please do let me know once we have this feature available. 
02-09-2017 12:20 AM
1 Kudo
@Artem Ervits Thank you for the info. Also, is there an activity page where we can see how we got points? For example, "you got one point for this upvote by this user" or "you got fifteen points for this accepted answer".
02-09-2017 12:10 AM
1 Kudo
Sometimes I see that my points have grown, but I don't know why or how they increased or decreased.
02-09-2017 12:00 AM
6 Kudos
@Raghav Kumar Gautam As you can see from the logs, you need to set dfs.namenode.accesstime.precision to 3600000.
The property dfs.access.time.precision is deprecated. Ambari sets the new one, dfs.namenode.accesstime.precision, to 0, which disables access-time updates.
So, setting the new property should resolve the issue.
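The corresponding hdfs-site.xml entry would look like this (the value is in milliseconds, so 3600000 means access times are updated at most once per hour):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <!-- 0 disables access-time tracking entirely; 3600000 restores
       the default hourly precision -->
  <value>3600000</value>
</property>
```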
02-08-2017 11:54 PM
1 Kudo
@Raghav Kumar Gautam - Can you please share the HDFS NFS server logs?
12-22-2016 07:45 PM
2 Kudos
In the attached hdfs-site.xml, both dfs.namenode.http-address and dfs.https.port use the same port, 50070, which can cause a conflict. Please set dfs.https.port to 50071, restart Hadoop, and then try the browser once the NameNode is out of safe mode.
If this does not work, you might want to set dfs.namenode.http-address to 0.0.0.0:50070, restart Hadoop, wait for the NameNode to leave safe mode, and then try hitting http://localhost:50070 in the browser.
Let me know if this helps. Thanks!
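Putting the two suggestions above together, the hdfs-site.xml would carry distinct ports for the two properties:

```xml
<!-- hdfs-site.xml: keep the HTTP address and the HTTPS port distinct -->
<property>
  <name>dfs.namenode.http-address</name>
  <!-- 0.0.0.0 binds the NameNode web UI on all interfaces -->
  <value>0.0.0.0:50070</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50071</value>
</property>
```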
12-21-2016 11:03 PM
Is there a way to query HDFS JMX metrics for top users over a period of time? I know nntop does give the details in JMX, but is there a way to query it over a period of time and keep track?
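For context, the nntop counts mentioned above are served by the NameNode's /jmx servlet (e.g. http://&lt;namenode&gt;:50070/jmx?qry=Hadoop:service=NameNode,name=TopUserOpCounts), so one way to track them over time is to poll that endpoint periodically and aggregate the per-user counts. A minimal sketch of the aggregation step; the bean layout assumed here is illustrative and may differ between HDFS versions:

```python
import json


def top_users(jmx_payload, op="*"):
    """Aggregate per-user operation counts from a TopUserOpCounts bean.

    `jmx_payload` is the parsed JSON returned by the NameNode /jmx servlet.
    `op` filters on a single operation type ("*" keeps all of them).
    """
    counts = {}
    for bean in jmx_payload.get("beans", []):
        if "TopUserOpCounts" not in bean.get("name", ""):
            continue
        windows = bean.get("windows", [])
        # Some versions embed the nntop report as a JSON string.
        if isinstance(windows, str):
            windows = json.loads(windows)
        for window in windows:
            for op_entry in window.get("ops", []):
                if op != "*" and op_entry.get("opType") != op:
                    continue
                for user in op_entry.get("topUsers", []):
                    counts[user["user"]] = counts.get(user["user"], 0) + user["count"]
    return counts
```

A cron job or a metrics collector polling the servlet at a fixed interval and storing these aggregates would give the over-time view the question asks about.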
Labels: Apache Hadoop
12-21-2016 07:33 PM
1 Kudo
The ZooKeeper server redirects stderr and stdout to zookeeper.out. As mentioned above, you can use the ROLLINGFILE appender, which writes to zookeeper.log.
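The ROLLINGFILE appender mentioned above is configured in ZooKeeper's conf/log4j.properties; a sketch of the relevant lines (the log path and file size are illustrative):

```
# conf/log4j.properties
# Route the root logger to the rolling file instead of the console
zookeeper.root.logger=INFO, ROLLINGFILE

log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.File=/var/log/zookeeper/zookeeper.log
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
```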