
Yarn-site.xml changes not reflecting

Explorer

We have an application managed by YARN. When we change yarn-site.xml, those changes are not reflected; the application is still running with the old configuration. We are new to YARN, so any help in this regard would be appreciated.

 

 

Note: we have already tried restarting YARN using stop-yarn.sh and start-yarn.sh, and also restarted DFS using stop-dfs.sh and start-dfs.sh. We are using Hadoop 2.7.3.

6 REPLIES

Champion

@srirocky

 

I think you are updating yarn-site.xml via the CLI. If you are using Cloudera Manager, then I would recommend updating yarn-site.xml via Cloudera Manager -> Yarn -> Configuration instead of the CLI. Hadoop maintains yarn-site.xml in many places for many reasons, so updating yarn-site.xml in one (wrong) place will not be reflected.

 

After you make the above change, CM -> Yarn will show a stale configuration warning; save it and restart YARN in CM itself (instead of the CLI).

 

Thanks

Kumar

Explorer

@saranvisa Unfortunately we are not using Cloudera Manager; we are using Apache Hadoop 2.7.3 and the YARN that comes along with it. I also made sure yarn-site.xml is updated on all nodes and has the same values.

 

This is what YARN is reporting:

 

Yarn-ExecutorCount.PNG

 

This is what is configured in yarn-site.xml: 22 GB and 7 cores, but YARN is only using 16 GB and 6 cores, and we're not sure why.

 

<configuration>

    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hdfs-name-node</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>22528</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>7</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>22528</value>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///tmp/hadoop/data/nm-local-dir,file:///tmp/hadoop/data/nm-local-dir/filecache,file:///tmp/hadoop/data/nm-local-dir/usercache</value>
    </property>

    <property>
        <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
        <value>500</value>
    </property>

    <property>
         <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
         <value>512</value>
    </property>

</configuration>
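One way to see which configuration the running daemons actually loaded is the /conf servlet that Hadoop daemons expose over HTTP (e.g. http://rm-host:8088/conf). A minimal sketch of parsing that XML dump for the properties in question; the hostname and the sample payload below are assumptions, not the real cluster's output:

```python
import xml.etree.ElementTree as ET

# Trimmed sample of what GET http://<rm-host>:8088/conf might return
# (shape assumed; the real dump lists every effective property).
SAMPLE_CONF = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
    <source>yarn-default.xml</source>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>6</value>
    <source>yarn-default.xml</source>
  </property>
</configuration>"""

def effective_props(conf_xml, names):
    """Return {name: (value, source)} for the requested property names."""
    root = ET.fromstring(conf_xml)
    out = {}
    for prop in root.findall("property"):
        name = prop.findtext("name")
        if name in names:
            out[name] = (prop.findtext("value"), prop.findtext("source"))
    return out

props = effective_props(SAMPLE_CONF,
                        {"yarn.nodemanager.resource.memory-mb",
                         "yarn.nodemanager.resource.cpu-vcores"})
for name, (value, source) in sorted(props.items()):
    print(f"{name} = {value} (from {source})")
```

If a property's source turns out to be yarn-default.xml rather than yarn-site.xml, the daemon never read the file that was edited.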



 

Champion

@srirocky

 

The image that you have pasted is not visible (under "this is what yarn reflecting")

 

In the meantime, please answer the below:

1. What is your cluster capacity?

2. Are you following the formulas from the link below to set up YARN, or increasing the sizes with some random numbers?

 

https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html
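The arithmetic in that tuning guide boils down to reserving memory and cores for the OS and Hadoop daemons and giving the rest to YARN containers. A rough sketch for a 24 GB / 8-core worker; the reservation numbers here are illustrative assumptions, so use the guide's own tables for real values:

```python
def yarn_node_resources(total_mem_gb, total_cores,
                        os_reserve_gb=4, daemon_reserve_gb=1,
                        reserved_cores=1):
    """Rough split of one worker node's resources for YARN (sketch only)."""
    yarn_mem_gb = total_mem_gb - os_reserve_gb - daemon_reserve_gb
    yarn_cores = total_cores - reserved_cores
    return yarn_mem_gb * 1024, yarn_cores  # memory in MB, as YARN expects

mem_mb, vcores = yarn_node_resources(24, 8)
print(f"yarn.nodemanager.resource.memory-mb = {mem_mb}")
print(f"yarn.nodemanager.resource.cpu-vcores = {vcores}")
```

With these assumed reservations a 24 GB / 8-core node would offer 19456 MB and 7 vcores to YARN, noticeably less than the 22528 MB configured above.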

 

Thanks

Kumar

Explorer

Our cluster capacity is:

1 Master/Driver Node: Memory: 24 GB, Cores: 8
4 Worker Nodes: Memory: 24 GB, Cores: 8

 

Yes, we are following the formula as mentioned.

Champion

@srirocky

 

If your total memory is 24 GB, then you should not set your max memory allocation to 22 GB, because when you run a job it may use more than one container, and the properties below that you are setting apply per container.

 

1. As I mentioned above, please refer to the link I provided and search for these parameters; you will notice that they relate to containers:

 

yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb

 

2. Now go to http://ipaddress:8088, run a job, and check how many "Containers Running" there are. A small job will try to use one container, but for bigger jobs the count increases. Since you set 22 GB as the max memory allocation, when a job tries to use more than one container it may end up with unnecessary errors (case by case), because your total memory itself is only 24 GB.

 

3. So the bottom line is that you cannot increase your min/max memory/core allocations with random numbers; you need to follow the link I provided to calculate them.
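The "Containers Running" figure from the 8088 UI is also available programmatically from the ResourceManager REST API (GET http://rm-host:8088/ws/v1/cluster/metrics). A sketch of reading the relevant fields; the payload below is a trimmed sample, not real cluster output:

```python
import json

# Trimmed example of the /ws/v1/cluster/metrics response (sample values).
SAMPLE_METRICS = """{"clusterMetrics": {
  "containersAllocated": 6,
  "allocatedMB": 16384,
  "allocatedVirtualCores": 6,
  "totalMB": 90112,
  "totalVirtualCores": 28
}}"""

m = json.loads(SAMPLE_METRICS)["clusterMetrics"]
print(f"containers running : {m['containersAllocated']}")
print(f"memory allocated   : {m['allocatedMB']} / {m['totalMB']} MB")
print(f"vcores allocated   : {m['allocatedVirtualCores']} / {m['totalVirtualCores']}")
```

Comparing totalMB and totalVirtualCores here against what yarn-site.xml declares per node is a quick way to spot the mismatch described above.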

 

By default, you can set the minimum to 1 GB and the max to 4 GB (subject to change):

yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
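In yarn-site.xml those two settings would look like this (1 GB min / 4 GB max, the starting values suggested above; adjust them per the tuning guide):

```xml
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
</property>
```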

 

Since the memory is specific to a container, bigger jobs will try to use more containers and, correspondingly, more of the max memory.

 

Hope this will help you!!

 

Thanks

Kumar

 

 

Champion
saranvisa is correct in that you should set a minimum, and the max should not push a single node's memory limits, as a single container cannot run across nodes.

There is still the mismatch in what is in the configs versus what YARN is using and reporting.

On the RM machine, get the process id for the RM with sudo su yarn -c "jps", and then get the process info for that id with ps -ef | grep <id>.

Does that show it is using the configs from the path that you changed? The path should be listed in -classpath.
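A small sketch of that check: given the ps output line for the RM, pull out the -classpath entries and see whether the Hadoop conf directory you edited is among them. The process line and paths below are made-up examples:

```python
import shlex

# Example ps -ef output line for the ResourceManager (paths are made up).
PS_LINE = ("yarn 4321 1 0 10:00 ? 00:01:23 /usr/lib/jvm/java/bin/java "
           "-Dproc_resourcemanager -Xmx1000m "
           "-classpath /opt/hadoop-2.7.3/etc/hadoop:"
           "/opt/hadoop-2.7.3/share/hadoop/common/* "
           "org.apache.hadoop.yarn.server.resourcemanager.ResourceManager")

def classpath_entries(ps_line):
    """Return the -classpath entries from a java command line."""
    tokens = shlex.split(ps_line)
    cp = tokens[tokens.index("-classpath") + 1]
    return cp.split(":")

entries = classpath_entries(PS_LINE)
# Is the conf dir you actually edited on the daemon's classpath?
print("/opt/hadoop-2.7.3/etc/hadoop" in entries)
```

If the directory you edited is not in that list, the daemon is reading a different yarn-site.xml, which would explain the stale values.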