Member since 12-28-2015

47 Posts
2 Kudos Received
4 Solutions

My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
|  | 8042 | 05-24-2017 02:14 PM |
|  | 3395 | 05-01-2017 06:53 AM |
|  | 6421 | 05-02-2016 01:11 PM |
|  | 7716 | 02-09-2016 01:40 PM |

07-01-2019 07:12 PM
Hi, did you fix this problem? I have the same issue too.
						
					

03-12-2019 09:26 AM
Hi Naveen,

If you only have a limited number of ports available, you can assign a port to each application:

--conf "spark.driver.port=4050" --conf "spark.executor.port=51001" --conf "spark.ui.port=4005"

Hope it helps.

Thanks,
Jerry
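For reference, these settings would normally be passed straight on the spark-submit command line. A minimal sketch, assuming a YARN deployment; the class name, JAR, and port values are placeholders rather than anything from this thread, and spark.executor.port is, as far as I know, only honored by older Spark 1.x releases:

```bash
# Hypothetical spark-submit invocation pinning the driver, executor, and UI ports.
# Port values are illustrative; pick ports that are actually open on your cluster.
spark-submit \
  --master yarn \
  --class com.example.MyApp \
  --conf "spark.driver.port=4050" \
  --conf "spark.executor.port=51001" \
  --conf "spark.ui.port=4005" \
  my-app.jar
```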
						
					

05-30-2017 01:41 AM
Hi Naveen,

I realized my understanding was wrong: the system sees "joy" as the user who is trying to write, so permissions are enforced against "joy". I set up an ACL for joy and my program worked fine.

Now I understand how ACLs are used with respect to impersonation. Thanks for the pointer.

Regards,
Niranjan
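For anyone hitting the same thing, the ACL itself is set with the standard HDFS commands; a minimal sketch, with a hypothetical target directory:

```bash
# Give the proxied user "joy" read/write/execute access on the output directory.
# /data/output is a placeholder for whatever directory the job writes to.
hdfs dfs -setfacl -m user:joy:rwx /data/output

# Confirm the ACL entries now in effect.
hdfs dfs -getfacl /data/output
```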
						
					

05-24-2017 02:37 PM
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

I recommend opening a new topic if you have any other questions on storage pools. That way this discussion can stay on topic.
						
					

05-24-2017 02:31 PM
To recommission, you can just add the DataNode back, and the NameNode will identify all the blocks that were previously present on that DataNode. Once the NameNode has this information, it will wipe out the extra third replica it created during the decommission.

You may have to run the HDFS balancer if you format the disks and then recommission the node to the cluster, which is not a best practice.
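If you do end up needing it, the balancer is a one-line command; a minimal sketch, assuming the default 10% utilization threshold is acceptable for your cluster:

```bash
# Rebalance block placement across DataNodes; -threshold is the allowed deviation
# (in percent) of each node's disk utilization from the cluster average.
hdfs balancer -threshold 10
```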
						
					

05-01-2017 06:53 AM
I had to check the grant on hr_role instead of emp_role; that was the solution to this question.
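For completeness, the grants on a role can be inspected from beeline; a minimal sketch, assuming a Sentry-managed Hive deployment and a placeholder HiveServer2 URL:

```bash
# List the privileges granted to hr_role (Sentry-style syntax; the JDBC URL is a placeholder).
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" \
  -e "SHOW GRANT ROLE hr_role;"
```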
						
					

03-30-2016 12:09 PM
Hi,

Encryption at rest protects your data from an unauthorized user who has no read permission in HDFS, or who has no access to the cluster at all and tries to read the data from the disk directly.

In your example the directory /tmp/user1zone1 has read access for all cluster users, and hence user2 is allowed to read from it:

drwxr-xr-x - user1 supergroup 0 2016-02-10 02:42 /tmp/user1zone1
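For context, an encryption zone like that one is created with the key and crypto admin commands; a minimal sketch, assuming a KMS is already configured and using a hypothetical key name:

```bash
# Create an encryption key in the configured KMS (key name is hypothetical).
hadoop key create user1key

# An encryption zone must be created on an empty directory, by an HDFS admin.
hdfs dfs -mkdir /tmp/user1zone1
hdfs crypto -createZone -keyName user1key -path /tmp/user1zone1

# Note: encryption at rest does not replace HDFS permissions; tighten the
# directory mode or ACLs if user2 should not be able to read the data.
```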
						
					

01-28-2016 05:39 PM
Thank you all for your time; the logical workaround sounds good to me.
						
					

01-18-2016 02:10 PM
You won't save HDFS filesystem space by "archiving" or "combining" small files. In many scenarios, though, you will get a performance boost from combining them, and you will also reduce the metadata overhead on the NameNode.
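One common way to do the combining is a Hadoop archive (HAR); a minimal sketch, with hypothetical paths:

```bash
# Pack everything under /user/demo/small-files into a single HAR file.
# Paths and the archive name are placeholders; the data is not compressed,
# so this reduces NameNode metadata, not raw storage.
hadoop archive -archiveName logs.har -p /user/demo/small-files /user/demo/archived

# The archived files remain readable through the har:// filesystem scheme.
hdfs dfs -ls har:///user/demo/archived/logs.har
```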
						
					