Member since: 09-25-2015

109 Posts · 36 Kudos Received · 8 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3457 | 04-03-2018 09:08 PM |
|  | 5363 | 03-14-2018 04:01 PM |
|  | 12688 | 03-14-2018 03:22 PM |
|  | 4236 | 10-30-2017 04:29 PM |
|  | 2194 | 10-17-2017 04:49 PM |
Posted 04-03-2018 09:12 PM · 1 Kudo

Hi @Aishwarya Dixit, you can gracefully shut down the RegionServer; that will trigger the HBase Master to perform a bulk assignment of all regions hosted by that RegionServer.
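A graceful shutdown like the one described above can be done with the `graceful_stop.sh` script that ships with HBase; it moves regions off the server before stopping the process, so the Master does not have to recover them. This is a minimal sketch: the hostname is a placeholder, and the script is run from the HBase install directory on the node being decommissioned.

```shell
# Gracefully stop one RegionServer; its regions are unloaded first and the
# HBase Master reassigns them to the remaining RegionServers.
# (rs-node-01.example.com is a placeholder hostname.)
bin/graceful_stop.sh rs-node-01.example.com
```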
Posted 04-03-2018 09:10 PM

Hi @Anurag Mishra, please accept the answer if it resolved your issue.
Posted 04-03-2018 09:08 PM · 1 Kudo

Hi @Venkata Sudheer Kumar M, a CPU is capable of running multiple containers if the jobs are not CPU-intensive. The stack advisor only recommends not going beyond "CPU(s) * 2"; however, nothing stops you from configuring a higher value. If you observe your container-concurrency metrics and CPU utilization, you can identify your own threshold of vcores per CPU and set it accordingly. Note: yarn.nodemanager.resource.cpu-vcores only takes effect if you enable CPU scheduling.
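As a sketch of the CPU-scheduling prerequisite mentioned above: the property values below are illustrative examples, not taken from the thread, and the grep mirrors how you would verify the current setting on a NodeManager host.

```shell
# CPU scheduling in YARN's CapacityScheduler requires the dominant-resource
# calculator; until it is enabled, cpu-vcores is not enforced.
# Example yarn-site.xml settings (values are illustrative only):
#   yarn.scheduler.capacity.resource-calculator =
#       org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
#   yarn.nodemanager.resource.cpu-vcores = <CPUs * 2, or higher if jobs
#       are not CPU-bound>

# Check what is currently configured on a NodeManager host:
grep -A1 'yarn.nodemanager.resource.cpu-vcores' /etc/hadoop/conf/yarn-site.xml
```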
Posted 03-19-2018 09:01 PM

Hi @Anurag Mishra, you can use the value of 'yarn.resourcemanager.cluster-id' as the jobTracker:

```shell
# grep -A1 'yarn.resourcemanager.cluster-id' /etc/hadoop/conf/*
jobTracker=yarn-cluster
```

However, "Failing over to rm2" is just an INFO message; it indicates that rm1 is in standby. Your issue with the Oozie spark2 action would be a different problem.
Posted 03-19-2018 08:41 PM

Hi @Bijay Deo, you may require a hotfix on HDP that includes OOZIE-2606, OOZIE-2658, OOZIE-2787, and OOZIE-2802. Please open a support case.
Posted 03-19-2018 06:36 PM

Hi @Aishwarya Dixit, did it work? You can always shut down the RegionServer process, and the HBase Master will reassign all of its regions to a different RegionServer host.
Posted 03-14-2018 04:08 PM

Please accept an answer so that we can mark this request closed.
Posted 03-14-2018 04:01 PM

Do you have a YARN ResourceManager screenshot from when you run the 3 MapReduce jobs? http://<Active_RM_HOST>:8088/cluster/scheduler

From what I read in the screenshots, Maximum AM Resource is 20%, i.e. 20% of 391 GB = 78 GB. The values of yarn.app.mapreduce.am.resource.mb and tez.am.resource.memory.mb determine how many AMs can run concurrently.
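The arithmetic above can be sketched as a back-of-envelope calculation. The cluster memory and AM percentage are from the thread; the per-AM size of 2048 MB is only an illustrative value, since yarn.app.mapreduce.am.resource.mb was not given.

```shell
# How many MapReduce AMs fit in the AM pool?
cluster_mem_gb=391   # total cluster memory, from the thread
max_am_percent=20    # "Maximum AM Resource", from the thread
am_pool_mb=$(( cluster_mem_gb * 1024 * max_am_percent / 100 ))  # ~78 GB

am_mb=2048           # yarn.app.mapreduce.am.resource.mb (illustrative value)
echo "concurrent AMs: $(( am_pool_mb / am_mb ))"
```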
Posted 03-14-2018 03:42 PM

Hi @Alpesh Virani, can you also share a Resource Manager UI screenshot? That will show the actual usage for your queue. http://<Active_RM_HOST>:8088/cluster/scheduler
Posted 03-14-2018 03:33 PM

Hi @Alpesh Virani, there are several possibilities. When you have multiple Hive sessions open with execution engine "mr", can you tell us:

1. How much of the "default" queue's resources is used / available? Check this in the YARN RM UI > Scheduler.
2. If 100% is available in the "default" queue, check the AM container size (yarn.app.mapreduce.am.resource.mb) and the "Maximum AM Resource" for the "default" queue, and see whether the queue has enough resources to run multiple MR AM containers.
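The same queue-usage information shown on the scheduler UI page is also exposed by the ResourceManager REST API, which can be handy when a screenshot is hard to obtain. This is a sketch against a live cluster; the host is the same placeholder used above.

```shell
# Fetch scheduler/queue state (capacity, used capacity, max AM resource
# per queue) as JSON from the active ResourceManager.
curl -s "http://<Active_RM_HOST>:8088/ws/v1/cluster/scheduler"
```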