
Confusion on Yarn parameter settings

Hi All, 

 

I am really confused by a few YARN settings. I ran into a scenario and spent a lot of time tracking down the issue; my questions are below.

 

Cluster details:
-----------------------
Our cluster has 18 nodes, each with 240 GB of RAM, and we set yarn.scheduler.maximum-allocation-mb (the largest amount of physical memory, in MiB, that can be requested for a container) to 44 GB.

User query:
---------------
I see that one of the users submitted a query with mapreduce.map.memory.mb = 40000 MB (so the query is requesting 40 GB containers).
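For illustration, such a request is typically passed along these lines (the Hive session and job names here are just examples, not the user's actual query):

    -- e.g. in a Hive session:
    SET mapreduce.map.memory.mb=40000;

    # or on a plain MapReduce job submission:
    hadoop jar my-job.jar MyJob -Dmapreduce.map.memory.mb=40000 /input /output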

 

Situation:
---------------
The cluster is now utilizing only 30-40% of its resources.

 

Questions:
---------------
1. Since we hardcoded 44 GB for each container and the user is requesting 40 GB from the cluster, only 4 GB remains. Is this why the cluster is not getting utilized? If so, what about the RAM on the other nodes (the remaining 16 nodes, assuming 2 Resource Managers in the cluster)?
2. We enabled the Fair Scheduler in our environment, but why does it not seem to work once other users submit jobs? All the jobs are in a hung state. Is the Fair Scheduler overridden in this case? Even the job requesting 40 GB containers is hung.
3. Is there anything in Hadoop like 50% of memory being allocated to running containers and 50% being "reserved" for future containers? If that is the case all the time, what about performance when there are large jobs (say a table of 50 TB)?
4. The suggestion we got is to reduce yarn.scheduler.maximum-allocation-mb (to, say, 15 GB). What difference does that make when a user later submits a job requesting a 12 GB container (even if they request 40 GB they will get at most 15 GB)?
5. Do user settings take priority over cluster settings in any case?
6. On the net I see the following recommendation for yarn.nodemanager.resource.memory-mb: the amount of RAM on the host minus the amount needed for non-YARN-managed work (including memory needed by the DataNode daemon). If that is the case, say the DataNode process takes 5 GB for loading data and Impala uses 50 GB, then we need to give 195 GB to YARN??

 

Can someone clarify the YARN parameter settings and how to configure them for my environment? There are so many parameters that it gets confusing.

 

Thanks

Kishore 


Re: Confusion on Yarn parameter settings

Super Collaborator

There are two settings that you need to look at:

- yarn.scheduler.maximum-allocation-mb sets the maximum size of a single container

- yarn.nodemanager.resource.memory-mb sets the maximum amount of memory available to containers on the node
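As a rough sketch (the values below are only illustrative, not a recommendation for your cluster), both of these live in yarn-site.xml:

    <!-- yarn-site.xml, illustrative values only -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>204800</value>  <!-- memory on this node handed to YARN, ~200 GB -->
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>45056</value>   <!-- largest single container allowed, 44 GB -->
    </property>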

 

When a request comes in for a container that is larger than maximum-allocation-mb, it will be denied: the application cannot be submitted.

 

If you have 240 GB in the host, I would expect the NodeManager to get about 200 GB of that if you only run YARN on the node. That should allow you to run more than one large container of the size you have. However, running with a 40 GB container for MapReduce seems a bit over the top: do you really need all that?
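The rough arithmetic behind that 200 GB figure (the split below is an assumption for illustration, not a measurement from your cluster) would be something like:

    240 GB  total RAM on the node
    - 40 GB reserved for everything outside YARN (OS, DataNode, NodeManager,
            Impala, other daemons -- the exact split depends on your node)
    ------
    200 GB  -> yarn.nodemanager.resource.memory-mb = 204800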

 

If you use DRF then you might not have a memory limitation but a vcores limitation. You have not mentioned anything about that side, so I am not sure what you have configured and whether that might be the problem or not.

There are also things like the number of applications that can run and the AM share which could influence what you see.
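For reference, those knobs live in the Fair Scheduler allocation file; this is only a sketch with made-up values and a hypothetical queue name, not a recommendation:

    <!-- fair-scheduler.xml, illustrative sketch -->
    <allocations>
      <defaultQueueSchedulingPolicy>drf</defaultQueueSchedulingPolicy>
      <queue name="etl">  <!-- hypothetical queue -->
        <maxRunningApps>20</maxRunningApps>  <!-- cap on concurrent applications -->
        <maxAMShare>0.3</maxAMShare>         <!-- fraction of the queue usable by ApplicationMasters -->
        <maxResources>400000 mb, 100 vcores</maxResources>
      </queue>
    </allocations>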

 

There is a series of blog posts out that should also help with this; it starts with:

http://blog.cloudera.com/blog/2015/09/untangling-apache-hadoop-yarn-part-1/

Three parts are out currently; part 4 is coming real soon...

 

If you need more help open a support case with us and we can work through setting up the scheduler with you.

 

Wilfred

Re: Confusion on Yarn parameter settings

Hi Wilfred,

 

Thanks for the reply. I will go through the blog and get back to you.

 

Thanks

Kishore

Re: Confusion on Yarn parameter settings

Hi Wilfred,

 

Is there a way to restrict the number of map/reduce tasks that run in a single container?

 

Thanks

Kishore


Re: Confusion on Yarn parameter settings

Hi Wilfred,

 

I am able to find part 1 and part 2 but not part 3. Can you please check and update on this?

 

Thanks

Kishore 

Re: Confusion on Yarn parameter settings

Super Collaborator

One container runs one mapper or one reducer, never more than one.

There is no way to "limit" things inside a container since it is a one-to-one relationship.
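Since each task gets its own container, the knobs you have are the per-task container sizes themselves; a minimal sketch of the usual MapReduce properties (values are only illustrative):

    <!-- mapred-site.xml or per-job settings, illustrative values -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>4096</value>       <!-- container size for each map task -->
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>8192</value>       <!-- container size for each reduce task -->
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx3276m</value>  <!-- JVM heap, kept below the container size -->
    </property>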

 

Wilfred
