Member since 09-17-2015
			
      
436 Posts · 736 Kudos Received · 81 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5086 | 01-14-2017 01:52 AM |
| | 7348 | 12-07-2016 06:41 PM |
| | 8717 | 11-02-2016 06:56 PM |
| | 2809 | 10-19-2016 08:10 PM |
| | 7082 | 10-19-2016 08:05 AM |
			
    
	
		
		
12-27-2016 12:47 PM
Hi all,

Does anyone have a workaround for this problem? I have exactly the same case, with similar issues on Sandbox 2.5 (VirtualBox-5.1.12-112440-Win, HDP_2.5_virtualbox).

I killed the jobs with PuTTY as root using `yarn application -kill application_1482410373661_0002`, but they are still visible in Ambari:

```
[root@sandbox ~]# yarn application -kill application_1482410373661_0002
16/12/24 12:26:40 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/12/24 12:26:40 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
16/12/24 12:26:40 INFO client.AHSProxy: Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
16/12/24 12:26:44 WARN retry.RetryInvocationHandler: Exception while invoking ApplicationClientProtocolPBClientImpl.getApplicationReport over null. Not retrying because try once and fail.
org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1482410373661_0002' doesn't exist in RM.
```

I found a corresponding issue: "Tez client keeps trying to talk to RM even if RM does not know about the application" (https://issues.apache.org/jira/browse/TEZ-3156). This patch should be included, as it was fixed in version 0.7.1.

In the log (Ambari query) I can read 993 times:

```
INFO : Map 1: 0/1 Reducer 2: 0/2
```

The query is the one proposed in the tutorial (http://fr.hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/#section_4):

```
SELECT truckid, avg(mpg) avgmpg FROM truck_mileage GROUP BY truckid;
```

Any idea how to clear the history and restart without the running state? Thanks in advance.
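As a side note, when scripting this kind of cleanup, the application IDs can be pulled out of `yarn application -list` output instead of typed by hand and then fed to `yarn application -kill` one by one. A minimal sketch (the sample output line below is illustrative of the tab-separated `-list` format, not taken from the post):

```python
import re

# Example line in the shape printed by `yarn application -list`
# (tab-separated fields; the first field is the application ID).
sample = ("application_1482410373661_0002\tSELECT truckid...\tTEZ\t"
          "hive\tdefault\tRUNNING\tUNDEFINED\t0%\tN/A")

def extract_app_ids(listing):
    """Return every YARN application ID found in a -list output blob."""
    return re.findall(r"application_\d+_\d+", listing)

ids = extract_app_ids(sample)
# Each ID could then be passed to: yarn application -kill <id>
print(ids)
```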
						
					
    
	
		
		
06-08-2016 12:02 PM
Hue is not included with the current version of the sandbox. All activities are done either through Ambari or from the OS prompt. If you want to use Hue, you would have to "side-load" it onto your sandbox; I am sure there are instructions for that on the Internet. I did not do it, as we want to stay "stock" Hortonworks.
						
					
    
	
		
		
03-09-2016 05:54 PM · 2 Kudos
You are invoking the API that stops the NodeManager, not the one that puts it in maintenance mode. To put it in maintenance mode, try the following:

```
curl -u admin:OpsAm-iAp1Pass -H "X-Requested-By: ambari" -i -X PUT \
  -d '{"RequestInfo":{"context":"Turn On Maintenance Mode For NodeManager"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' \
  http://viceroy10:8080/api/v1/clusters/et_cluster/hosts/serf120int.etops.tllsc.net/host_components/NODEMANAGER
```
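The same call is easy to script. A minimal Python sketch of building that PUT request (cluster, host, and URL are the ones from the curl example; this only constructs the request, it does not attach credentials or send it):

```python
import json
import urllib.request

def maintenance_request(base_url, cluster, host, component, state="ON"):
    """Build the Ambari PUT request that toggles maintenance mode
    for one host component. The request is returned, not sent."""
    payload = {
        "RequestInfo": {"context": f"Turn {state} Maintenance Mode For {component}"},
        "Body": {"HostRoles": {"maintenance_state": state}},
    }
    url = (f"{base_url}/api/v1/clusters/{cluster}/hosts/{host}"
           f"/host_components/{component}")
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"X-Requested-By": "ambari"},
    )

req = maintenance_request("http://viceroy10:8080", "et_cluster",
                          "serf120int.etops.tllsc.net", "NODEMANAGER")
# Basic-auth credentials would still need to be added before urlopen(req).
```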
						
					
    
	
		
		
03-04-2016 05:45 PM · 1 Kudo
Yes, that worked. Adding Zeppelin did not work, though.
						
					
    
	
		
		
06-16-2016 04:09 PM
Thanks for your comment. I just solved the problem after two days of struggling. The reason was the proxy settings set on my machine by the company I work for. I added the 'sandbox.hortonworks.com' domain name to the proxy bypass list. Also, to make a WebHDFS connection to the sandbox from another CentOS VM, I added 'sandbox.hortonworks.com' to the no_proxy variable in /etc/bashrc on the CentOS machine, and it worked! Thanks 🙂
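The effect of the `no_proxy` setting can be checked without touching the network, since Python's `urllib` consults the same environment variable. A small sketch (the host name is the one from the post; `example.com` is just a counter-example):

```python
import os
import urllib.request

# Simulate the /etc/bashrc setting described above.
os.environ["no_proxy"] = "sandbox.hortonworks.com"

# Truthy when the host would bypass the proxy, falsy otherwise.
bypassed = urllib.request.proxy_bypass_environment("sandbox.hortonworks.com")
other = urllib.request.proxy_bypass_environment("example.com")
print(bool(bypassed), bool(other))
```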
						
					
    
	
		
		
02-04-2016 03:32 PM
@Davide Vergari Could you post it as an article? Thanks for sharing this.
						
					
    
	
		
		
02-02-2016 02:28 AM
@vbhoomireddy Are you still having issues with this? Can you accept the best answer or provide your own solution?
						
					
    
	
		
		
01-26-2016 06:50 AM · 4 Kudos
A couple of options:

1. From Ambari, to smoke-test components one at a time, select "Run Service Check" from that component's "Service Actions" menu.
2. You can also invoke the smoke test via the API: https://cwiki.apache.org/confluence/display/AMBARI/Running+Service+Checks
3. You can manually run the validation checks provided in the docs: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/rpm_validating_the_core_hadoop_installation.html
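For option 2, the request body posted to `/api/v1/clusters/<cluster>/requests` follows the shape documented on the wiki page linked above. A minimal sketch of building it (the field names are my reading of that page, so treat them as an assumption and verify against the wiki):

```python
import json

def service_check_body(service):
    """Request body for Ambari's run-service-check endpoint
    (POST /api/v1/clusters/<cluster>/requests), per the wiki page above."""
    return {
        "RequestInfo": {
            "context": f"{service} Service Check",
            "command": f"{service}_SERVICE_CHECK",
        },
        "Requests/resource_filters": [{"service_name": service}],
    }

print(json.dumps(service_check_body("HDFS"), indent=2))
```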
						
					
    
	
		
		
03-07-2017 10:46 PM
Using KEYRING was state of the art at the time Kerberos was bundled for RHEL 7. Moving forward into the world of containers, however, using KEYRING becomes a challenge, so SSSD is building an internal ticket cache that will be supported by the system Kerberos libraries. In general, the recommendation nowadays is to use the native OS Kerberos libraries; they are the most recent and provide the latest functionality and experience.
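For reference, the credential-cache type being discussed is selected in `krb5.conf`. A minimal illustration of the two settings (the keyring form is the RHEL 7 default; the file form is the classic one — values here are the common defaults, not taken from the post):

```ini
[libdefaults]
    # RHEL 7 default: kernel keyring, one persistent cache per UID
    default_ccache_name = KEYRING:persistent:%{uid}

    # Classic file-based cache, friendlier to containers
    # default_ccache_name = FILE:/tmp/krb5cc_%{uid}
```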
						
					