Member since: 08-08-2017

1652 Posts | 30 Kudos Received | 11 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1915 | 06-15-2020 05:23 AM |
| | 15442 | 01-30-2020 08:04 PM |
| | 2063 | 07-07-2019 09:06 PM |
| | 8097 | 01-27-2018 10:17 PM |
| | 4563 | 12-31-2017 10:12 PM |
01-30-2018 03:49 PM
@Michael Bronson, I changed the script, since it wasn't parsing the consumer (by the way, great script, thanks):

```
topico="entrada"
for i in `/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers 2>&1 | grep consumer | cut -d "[" -f2 | cut -d "]" -f1 | tr ',' "\n"`
do
    /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers/$i/offsets 2>&1 | grep $topico
    if [ $? == 0 ]
    then
        echo $i
    fi
done
```
						
					
11-30-2017 10:39 AM
@Michael Bronson Yes, setting the parameter to 60 minutes will cause deleted content to be cleared from the trash after 60 minutes. Example: if we delete a file named "/home/admin/test.txt" at 1:00 PM, then with a 60-minute trash interval that file will be cleared from the .Trash directory at 2:00 PM. But if you want immediate deletion, the -skipTrash option is best, as it bypasses the trash entirely.
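As a quick sanity check on that example, the interval arithmetic can be sketched in plain shell (this is just minute math on the numbers above, not a real HDFS call; the hdfs command in the comment is the -skipTrash form mentioned in the answer):

```shell
# fs.trash.interval arithmetic: a file deleted at 1:00 PM with a
# 60-minute trash interval is cleared from .Trash at 2:00 PM.
trash_interval=60                  # fs.trash.interval, in minutes
deleted_at=$((13 * 60))            # 1:00 PM as minutes since midnight
cleared_at=$((deleted_at + trash_interval))
printf '%02d:%02d\n' $((cleared_at / 60)) $((cleared_at % 60))   # prints: 14:00
# To bypass the trash entirely (immediate deletion):
#   hdfs dfs -rm -skipTrash /home/admin/test.txt
```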
						
					
11-26-2017 01:08 PM
From the article "How to identify what is consuming space in HDFS" ( https://community.hortonworks.com/articles/16846/how-to-identify-what-is-consuming-space-in-hdfs.html ): by running the script from the article, we can see what takes the most space. In our case spark-history took the most space, and we deleted the logs/files from the Ambari GUI.
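The idea behind that script can be sketched like this; the usage figures below are made-up placeholders (the real numbers would come from a live cluster, as the comment shows):

```shell
# Sort per-directory usage figures largest-first and keep the top entry.
# The three input lines are simulated examples, not real cluster data.
printf '%s\n' \
    "1024 /spark-history" \
    "256 /tmp" \
    "512 /user" \
    | sort -k1,1nr | head -1        # prints: 1024 /spark-history
# Against a real cluster the figures would come from something like:
#   hdfs dfs -du -s '/*'
```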
						
					
11-26-2017 01:10 PM
The problem was solved. We found a wrong configuration in the hosts file /etc/hosts (a wrong host IP address). By editing the hosts file we also fixed the DNS configuration, and this solved the problem.
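For illustration, a correct /etc/hosts entry looks like the line below; the IP address and hostnames here are placeholders, not the actual values from this cluster:

```
# /etc/hosts - each line maps one IP address to its hostname(s)
192.168.1.10    worker01.example.com    worker01
```

Each node's entry should match the address the host actually uses on the network; a stale IP here makes name resolution point at the wrong machine even when DNS is otherwise fine.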
						
					
11-26-2017 01:14 PM
@Jay, first, thanks a lot for the great support. We actually solved it by re-configuring the worker with its previous IP and then restarting the worker host. After the server came up, the DataNode showed as alive on all workers and the worker is part of the cluster again.
						
					
11-23-2017 05:54 PM
Hi Jay, I am really at a loss here. What can we do next?
						
					
11-23-2017 01:44 PM
We solved this issue by doing the following: we noticed that there was no SSH access from the host running ambari-server to the other machines in the cluster, so we copied the public key from the master machine (the ambari-server host) to each machine in the cluster, and restarted the node whose Standby NameNode was down.
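A minimal sketch of that fix (the key is generated in a temp directory here to keep the sketch side-effect free; worker01/worker02/worker03 are placeholder hostnames, not the real node names):

```shell
# On the ambari-server host: create a key pair if one does not exist yet.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
[ -f "$keydir/id_rsa.pub" ] && echo "key pair created"   # prints: key pair created
# On the real host the key lives in ~/.ssh, and the copy step is:
#   for host in worker01 worker02 worker03; do ssh-copy-id "root@$host"; done
```

ssh-copy-id appends the public key to each target's ~/.ssh/authorized_keys, which is what lets ambari-server reach the nodes without a password.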
						
					
11-22-2017 01:11 PM
+1 for the answer; I will test it on my host.
						
					
11-21-2017 04:58 PM
@Michael Bronson rm -rf -> This is a Linux/Unix command, which will only delete a directory created in the Unix/Linux file system. Whereas hdfs dfs -rmr /DirectoryPath -> is for deletion of files/directories in the HDFS filesystem. In case I misinterpreted your question and you mean to ask what the difference is between "hdfs dfs -rmr" and "hdfs dfs -rm -rf", the latter doesn't exist, as there is no "-f" parameter to the rm command in the HDFS filesystem. We only have "-r" as an option for the rm command in HDFS to delete directories and files.
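The Linux side of that distinction can be demonstrated locally; the HDFS command at the end is left as a comment since it needs a running cluster:

```shell
# rm -rf removes a local directory tree, recursively and without prompting.
tmpdir=$(mktemp -d)
touch "$tmpdir/file.txt"
rm -rf "$tmpdir"
[ -d "$tmpdir" ] && echo "still there" || echo "removed"   # prints: removed
# The HDFS counterpart (-rmr is the older spelling of -rm -r):
#   hdfs dfs -rm -r /DirectoryPath
```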
						
					
11-21-2017 10:35 AM
We fixed this issue by restarting the ambari-server (ambari-server restart); after that we could start the standby Resource Manager.
						
					