
YARN resource restriction in Cloudera

Explorer

In our cluster we did not set any static pool allocations for YARN/HBase/Impala/HDFS, yet YARN is still restricted to a certain amount of memory/vcores. Does anyone know why it cannot take as many resources as its containers need, rather than leaving them pending?

 

1 ACCEPTED SOLUTION

Champion

@VincentSF

 

oh ok got it... 

 

Go to CM -> Yarn -> Configuration -> search for "yarn.nodemanager.resource.memory-mb"

 

It will show you the memory restriction set for each node (the value comes from yarn-site.xml).

 

You can tweak this a little.

 

Note:

1. The memory is shared by all the services, so you cannot give all of it to YARN alone. Also, don't raise the above setting too far, because it may create memory contention across the services. You could set it to roughly 50% of the total memory, but that really depends on how much memory the other services use. And since you have 184 nodes, that 50% is not one-size-fits-all; it will change case by case.

 

2. Also, when you increase the memory on each node, it is not recommended to increase it beyond yarn.scheduler.maximum-allocation-mb (the largest memory allocation a single container can be granted).

 

Hope this gives you some idea.

 

View solution in original post

9 REPLIES

Champion

Did you get a chance to check out these parameters to see how they have been configured in your cluster?

 

yarn.nodemanager.resource.memory-mb
yarn.nodemanager.pmem-check-enabled
yarn.nodemanager.vmem-pmem-ratio
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.minimum-allocation-vcores

 

Explorer

yarn.nodemanager.resource.memory-mb = 96 GB
yarn.nodemanager.pmem-check-enabled = don't see this in the configuration or yarn-site.xml
yarn.nodemanager.vmem-pmem-ratio = don't see this in the configuration or yarn-site.xml
yarn.nodemanager.resource.cpu-vcores = 24
yarn.scheduler.minimum-allocation-vcores = 1
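(Side note on the two missing properties: when they are not set anywhere, the stock Hadoop defaults apply, i.e. yarn.nodemanager.pmem-check-enabled=true and yarn.nodemanager.vmem-pmem-ratio=2.1.) For reference, a minimal yarn-site.xml sketch reflecting the values reported above might look roughly like this; the 98304 figure is simply 96 GB expressed in MB, and none of this is copied from the actual cluster:

<!-- illustrative sketch only; in a CM-managed cluster these values are set through Cloudera Manager -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>98304</value>  <!-- 96 GB of the node's RAM made available to YARN containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>     <!-- vcores offered to YARN on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>      <!-- smallest vcore allocation a container can request -->
</property>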

Champion

@VincentSF

 

After you assign roles/services to each node, resources are allocated to each node based on its capacity. To see your current allocation of CPU, memory, etc., go to CM -> each host (one by one) -> Resources

Explorer

I have all the numbers in front of me; however, how much YARN gets is normally governed by the static pools, which are not set in our cluster.

YARN still restricts itself to a fixed amount of resources. Our cluster has 48 TB of resources across the data nodes, and YARN is capping itself at 18 TB even without any static pool configuration.

Champion

 

@VincentSF

 

What do you mean by "YARN is restricting itself at 18 TB"? I hope you are referring to disk space...

 

Is your problem related to disk space or memory or something else?

 

 

Explorer

YARN is not going beyond 18 TB of memory (RAM), even without any percentages set in the static pools

Champion

@VincentSF

 

Is it a typo? Are you using TB by mistake instead of GB? I have never heard of a RAM capacity of 18 TB.

 

I am asking this because you are repeatedly using terabytes (TB)... am I missing something?

 

 

Explorer
We have a huge cluster, with 256 GB on each data node and 184 data nodes in the cluster.
Yes, it is terabytes, and it is not a typo.
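(As a rough check of those figures: 184 data nodes × 256 GB each works out to roughly 47 TB of RAM cluster-wide, which is consistent with the ~48 TB total mentioned earlier in the thread.)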

Champion

@VincentSF

 

oh ok got it... 

 

Go to CM -> Yarn -> Configuration -> search for "yarn.nodemanager.resource.memory-mb"

 

It will show you the memory restriction set for each node (the value comes from yarn-site.xml).
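That per-node cap is also where the cluster-wide ceiling comes from: roughly 184 NodeManagers × 96 GB each is about 17-18 TB, which lines up with the 18 TB you are seeing even though no static service pools are configured.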

 

You can tweak this a little.

 

Note:

1. The memory is shared by all the services, so you cannot give all of it to YARN alone. Also, don't raise the above setting too far, because it may create memory contention across the services. You could set it to roughly 50% of the total memory, but that really depends on how much memory the other services use. And since you have 184 nodes, that 50% is not one-size-fits-all; it will change case by case (a rough worked example follows after note 2).

 

2. Also, when you increase the memory on each node, it is not recommended to increase it beyond yarn.scheduler.maximum-allocation-mb (the largest memory allocation a single container can be granted).
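To put rough numbers on point 1 above: on one of the 256 GB data nodes described earlier, the 50% guideline would put yarn.nodemanager.resource.memory-mb somewhere around 128 GB (131072 MB), assuming the other services co-located on that host can live within the remaining memory; the current 96 GB setting is a more conservative version of the same idea.

And to make the relationship in point 2 concrete, here is a sketch with purely illustrative values (not taken from the cluster), keeping the per-container ceiling consistent with the per-node capacity:

<!-- illustrative only; in a CM-managed cluster these are set through Cloudera Manager -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>98304</value>   <!-- memory YARN may hand out on this node, in MB -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>98304</value>   <!-- largest single container the scheduler will grant, in MB -->
</property>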

 

Hope this gives you some idea.