Member since 01-06-2016

131 Posts · 99 Kudos Received · 3 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 2525 | 03-08-2016 08:34 PM |
|  | 5583 | 03-02-2016 07:04 PM |
|  | 3135 | 01-29-2016 05:47 PM |
			
    
	
		
		
Posted 01-19-2024 01:55 AM

If you set the User Limit Factor to 3.5, a single user can exceed the queue's minimum capacity and grow toward the queue's maximum capacity. For example, if the queue's minimum capacity is 20%, a user limit factor of 3.5 lets one user acquire resources up to 70% (20% × 3.5), provided that stays within the queue's maximum capacity; otherwise the queue's maximum capacity becomes the user's limit.
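The arithmetic above can be sketched as follows; the 20% minimum capacity and 3.5 factor come from the example, while the 100% maximum capacity is an assumed value for illustration:

```shell
# Effective per-user share = min(minimum capacity * user-limit-factor,
# queue maximum capacity). min_cap and ulf are the example's values;
# max_cap=100 is an assumption.
min_cap=20
ulf=3.5
max_cap=100
effective=$(awk -v m="$min_cap" -v f="$ulf" -v x="$max_cap" \
  'BEGIN { e = m * f; if (e > x) e = x; print e }')
echo "one user can consume up to ${effective}% of the cluster"
```

With a lower queue maximum (say 50%), the same formula caps the user at 50% instead of 70%.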
						
					
Posted 11-15-2016 09:44 AM

Hello @Ram D,

Kerberos has several advantages over LDAP, the most prominent being that it is more secure. Here's how:

1. Kerberos was conceived and implemented as an authentication protocol from the beginning, with protecting the user's credentials given the utmost importance. LDAP, by contrast, is a directory access protocol (à la a telephone directory) and was not originally meant for authentication.

2. With Kerberos, the user's password *never* travels over the wire. You can, of course, secure LDAP communication with SSL, but then it is an encrypted password that travels over the wire.

These are among the reasons the Hadoop world has adopted Kerberos as its de facto authentication standard.

Hope this helps.
						
					
Posted 10-07-2016 03:34 PM

Enabling Ranger audits will show who made the SQL call and what query was issued to HS2. This is more "metadata" centric; the actual data transferred is not logged in any permanent fashion — that would be the responsibility of the client. But the combination of the audit (who and what), possibly along with an HDFS snapshot, can lead to a reproducible scenario.
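The snapshot half of that idea can be sketched as below. The directory and snapshot name are hypothetical, and the `hdfs` commands themselves need a live cluster, so they are shown commented:

```shell
# Hypothetical warehouse directory and snapshot name; adjust to your cluster.
SNAP_DIR=/apps/hive/warehouse/sales
SNAP_NAME=audit_20161007
# One-time admin step, marks the directory snapshottable:
#   hdfs dfsadmin -allowSnapshot "$SNAP_DIR"
# Capture the data state the audited query ran against:
#   hdfs dfs -createSnapshot "$SNAP_DIR" "$SNAP_NAME"
echo "hdfs dfs -createSnapshot $SNAP_DIR $SNAP_NAME"
```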
						
					
Posted 08-15-2016 08:47 PM · 3 Kudos

@Ram D

Nothing automated; however, you can configure Dynamic Resource Allocation manually as a one-time activity:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_spark-guide/content/config-dra-manual.html

Some more here:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_spark-guide/content/ch_tuning-spark.html
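As a rough sketch, the manual setup those guides describe boils down to enabling the external shuffle service on the NodeManagers and then setting properties along these lines in `spark-defaults.conf`; the executor counts below are placeholder values, not recommendations:

```
spark.dynamicAllocation.enabled          true
spark.shuffle.service.enabled            true
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.initialExecutors 2
spark.dynamicAllocation.maxExecutors     10
```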
						
					
Posted 08-04-2016 09:53 PM · 1 Kudo

The question is unclear to me, but I recommend reading the following three blog posts carefully, as they go into great detail about balancer basics, configuration, and best practices:

https://community.hortonworks.com/articles/43615/hdfs-balancer-1-100x-performance-improvement.html
https://community.hortonworks.com/articles/43849/hdfs-balancer-2-configurations-cli-options.html
https://community.hortonworks.com/articles/44148/hdfs-balancer-3-cluster-balancing-algorithm.html
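As a toy illustration of the threshold idea those articles describe — a DataNode counts as over- or under-utilized when its used% differs from the cluster average by more than the threshold — with made-up utilization figures:

```shell
threshold=10           # percent; the balancer's default
utils="55 78 91 32"    # hypothetical per-DataNode used%
avg=$(echo "$utils" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s / NF }')
for u in $utils; do
  awk -v u="$u" -v a="$avg" -v t="$threshold" 'BEGIN {
    d = u - a
    if (d > t)       print u ": over-utilized (balancer moves blocks off)"
    else if (d < -t) print u ": under-utilized (balancer moves blocks on)"
    else             print u ": within threshold"
  }'
done
# The real invocation is simply: hdfs balancer -threshold $threshold
```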
						
					
Posted 06-30-2016 02:46 PM

							 Thank you @Benjamin Leonhardi 
						
					
Posted 03-16-2016 10:38 PM

@Ram D It is up to your cluster planning and how much historical data you need.
						
					
Posted 03-14-2016 03:17 PM · 1 Kudo

Use the `slider kill-container` command; it's how we test Slider apps' resilience to failure. There's also a built-in chaos monkey in Slider: you can configure the AM to randomly kill containers (and/or itself). See "Configuring the Chaos Monkey".
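A sketch of how that might look from the CLI. The application name and container id are hypothetical, and the `slider` commands need a running Slider application, so they are shown commented:

```shell
APP=my-hbase-app   # hypothetical Slider application name
# Find a live container id in the application's status output:
#   slider status "$APP"
# Kill one container and watch the AM replace it:
#   slider kill-container "$APP" --id container_1458000000000_0001_01_000003
echo "slider kill-container $APP --id <container-id>"
```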
						
					
Posted 03-08-2016 08:34 PM

```
curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo": {"context": "Stop NODEMANAGER"}, "Body": {"HostRoles": {"state": "INSTALLED"}}}' \
  http://viceroy10:8080/api/v1/clusters/et_cluster/hosts/$hostname/host_components/NODEMANAGER
```
						
					
Posted 03-09-2016 05:54 PM · 2 Kudos

You are invoking the API to stop the NodeManager (not to put it in maintenance mode). To put it in maintenance mode, try the following:

```
curl -u admin:OpsAm-iAp1Pass -H 'X-Requested-By: ambari' -i -X PUT \
  -d '{"RequestInfo": {"context": "Turn On Maintenance Mode For NodeManager"}, "Body": {"HostRoles": {"maintenance_state": "ON"}}}' \
  http://viceroy10:8080/api/v1/clusters/et_cluster/hosts/serf120int.etops.tllsc.net/host_components/NODEMANAGER
```
						
					