Support Questions
Find answers, ask questions, and share your expertise

YARN Config Groups settings for yarn scheduler configurations are not honored by YARN


New Contributor

Hi,

Referring to the knowledge base article - https://community.hortonworks.com/content/supportkb/202890/yarn-config-group-settings-for-yarn-sched...


I am creating different types of compute nodes in my cluster:

worker - m4.xlarge in AWS. m4.xlarge has 16 GB of memory, and I want to give 14 GB to YARN containers.

biggerworker - m4.2xlarge in AWS. m4.2xlarge has 32 GB of memory, and I want to give 30 GB to YARN containers.
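For reference, the MB values used in the blueprint below are just these GB targets converted at 1 GB = 1024 MB (yarn-site memory properties are expressed in MB); a quick sanity-check sketch:

```python
GB = 1024  # yarn-site memory properties are in MB, so 1 GB = 1024 MB

worker_mb = 14 * GB        # intended yarn.nodemanager.resource.memory-mb for "worker"
biggerworker_mb = 30 * GB  # intended value for "biggerworker"

print(worker_mb, biggerworker_mb)  # 14336 30720
```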


I am creating a cluster from Cloudbreak.

I am highlighting only the YARN configuration from the blueprint.


yarn-site has the following properties for the global configuration:


"yarn-site": {
    "properties": {
        "yarn.scheduler.minimum-allocation-mb" : "512",
        "yarn.scheduler.maximum-allocation-mb" : "30720"
    }
  }

And for the config group worker I have the following configuration:

{
  "name": "worker",
  "configurations": [
    {
      "yarn-site": {
        "properties": {
          "yarn.nodemanager.resource.memory-mb": "14336",
          "yarn.scheduler.maximum-allocation-mb": "14336"
        }
      }
    }
  ]
}


And for the config group biggerworker, I have the following configuration:

{
  "name": "biggerworker",
  "configurations": [
    {
      "yarn-site": {
        "properties": {
          "yarn.nodemanager.resource.memory-mb": "30720",
          "yarn.scheduler.maximum-allocation-mb": "30720"
        }
      }
    }
  ]
}


When I create the cluster and then look at the YARN Resource Manager UI

http://<resourcemanager-ip-address>:8088/cluster/nodes

I can see that the memory available for worker is 14 GB and the memory available for biggerworker is 30 GB.
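The same per-node capacity can also be read without the UI from the Resource Manager's REST API (`/ws/v1/cluster/nodes`, whose node entries carry an `availMemoryMB` field). A minimal sketch; the Resource Manager address is a placeholder:

```python
import json
import urllib.request


def node_memory(payload: dict) -> dict:
    """Map each NodeManager host to its available memory in MB,
    given the JSON payload of the RM /ws/v1/cluster/nodes endpoint."""
    nodes = payload.get("nodes", {}).get("node", []) or []
    return {n["nodeHostName"]: n["availMemoryMB"] for n in nodes}


if __name__ == "__main__":
    # Replace with the actual Resource Manager address.
    url = "http://<resourcemanager-ip-address>:8088/ws/v1/cluster/nodes"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    for host, mem_mb in sorted(node_memory(payload).items()):
        print(f"{host}: {mem_mb} MB available")
```

With the config groups applied correctly, worker hosts should report 14336 MB and biggerworker hosts 30720 MB.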

However, when I upscale a worker and a biggerworker, I observe the following:

- The YARN Resource Manager UI displays 30 GB as the memory available for both worker and biggerworker.


If I access the Ambari UI, modify yarn.scheduler.maximum-allocation-mb to 31 GB, and restart YARN,

I can see that all workers have 14 GB and all biggerworkers have 30 GB.


My Observations

YARN memory settings for config groups:

- are honored when the cluster is created, or when a global setting has been changed and YARN is restarted;

- are not honored during upscale.


4 REPLIES

Re: YARN Config Groups settings for yarn scheduler configurations are not honored by YARN

Community Manager

The above was originally posted in the Community Help Track. On Tue Jun 4 20:09 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help Track is intended for questions about using the HCC site itself.

Re: YARN Config Groups settings for yarn scheduler configurations are not honored by YARN

New Contributor

Re: YARN Config Groups settings for yarn scheduler configurations are not honored by YARN

New Contributor

Hi @Artem Ervits @rnettleton, can you please advise whether the blueprint custom configs are in the right place, or am I running into an Ambari config issue?

Re: YARN Config Groups settings for yarn scheduler configurations are not honored by YARN

New Contributor

I looked at the Cloudbreak and Ambari code.

The Ambari code has a shortcoming/bug: it does not pick up the config group properties while upscaling.

The config does take effect after a) upscaling a node, then b) manually changing or adding another property in the config group and restarting the services on the host.

I am working on a patch to fix the issue and will share it along with a PR upstream.