Support Questions

Yarn RM overallocates vcores on a Single Node manager

Hi,

When we run any YARN job, one of the NodeManagers is over-allocated by the ResourceManager: far more containers are scheduled and launched on that NodeManager than on the others, which is impacting our jobs' SLA. When we stopped the NodeManager service on that machine and re-ran the job, the containers were distributed properly.

Could anyone help me with this? We are using HDP 2.4. We are not using the Fair Scheduler, and preemption is not enabled either.

[Attachments: issue1.jpg, issue2.jpg]

Re: Yarn RM overallocates vcores on a Single Node manager

Hello

I suppose your standard container size is about 4 GB. Unless you are using cgroups, YARN allocates containers based on memory settings only, ignoring vcores; in your scenario, 476 GB available at 4 GB per container gives 119 containers on that node. If you want fine-grained control over CPU scheduling, you will need to configure YARN to use cgroups.
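The memory-only math above can be sketched like this (the function name is illustrative, not an actual YARN class; it just shows why vcores never limit the count under the default calculator):

```python
def max_containers(node_memory_gb: int, container_size_gb: int) -> int:
    """Containers a node can host when only memory is considered.

    With YARN's default (memory-only) resource calculation, the vcore
    request plays no role: only available memory divided by container
    size limits how many containers land on a node.
    """
    return node_memory_gb // container_size_gb


# 476 GB available, 4 GB per container -> 119 containers
print(max_containers(476, 4))
```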

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_yarn-resource-management/content/ch_cgro...
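A rough sketch of the relevant settings, assuming the CapacityScheduler on HDP 2.x; the property names below come from the Apache Hadoop documentation, so verify them against the guide linked above for your exact version:

```xml
<!-- capacity-scheduler.xml: make the scheduler account for vcores
     as well as memory when placing containers -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>

<!-- yarn-site.xml: enforce CPU limits with cgroups via the
     LinuxContainerExecutor -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
```

Note that the LinuxContainerExecutor also requires the container-executor binary and cgroup mount points to be set up on each NodeManager host, which the linked Hortonworks guide walks through.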