Created on 06-22-2017 09:08 AM - edited 09-16-2022 04:48 AM
In our cluster we did not set any Static Pool allocations for YARN/HBase/Impala/HDFS, yet YARN is still restricted to a certain amount of memory/vcores. Do we know why it cannot take as many resources as the containers need, instead of leaving them pending?
Created 06-22-2017 10:00 AM
Did you get a chance to check out these parameters to see how they have been configured in your cluster?
yarn.nodemanager.resource.memory-mb
yarn.nodemanager.pmem-check-enabled
yarn.nodemanager.vmem-pmem-ratio
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.minimum-allocation-vcores
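A quick shell sketch to read those values straight from yarn-site.xml instead of clicking through CM; it assumes the usual CDH client-config path /etc/hadoop/conf/yarn-site.xml and one XML tag per line, so adjust for your deployment:

  CONF=/etc/hadoop/conf/yarn-site.xml
  for prop in yarn.nodemanager.resource.memory-mb \
              yarn.nodemanager.pmem-check-enabled \
              yarn.nodemanager.vmem-pmem-ratio \
              yarn.nodemanager.resource.cpu-vcores \
              yarn.scheduler.minimum-allocation-vcores; do
    echo "== $prop =="
    # Print the <name> line plus the <value> line that follows it
    grep -A1 "<name>$prop</name>" "$CONF" || echo "(not set here; the YARN default applies)"
  done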
Created 06-22-2017 10:56 AM
yarn.nodemanager.resource.memory-mb = 96 GB
yarn.nodemanager.pmem-check-enabled = don't see this in the configuration or yarn-site.xml
yarn.nodemanager.vmem-pmem-ratio = don't see this in the configuration or yarn-site.xml
yarn.nodemanager.resource.cpu-vcores = 24
yarn.scheduler.minimum-allocation-vcores = 1
Created 06-22-2017 11:25 AM
After you configure roles/services on each node, resources are allocated to each node based on its capacity. To see the current allocation of CPU, memory, etc., go to CM -> each host one by one -> Resources.
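If clicking through every host is tedious, the ResourceManager REST API can list each node's capacity in one call. A minimal sketch, where rm-host is a placeholder for your actual ResourceManager hostname and 8088 is the default RM web port:

  # Each node entry includes usedMemoryMB and availMemoryMB
  curl -s "http://rm-host:8088/ws/v1/cluster/nodes" | python -m json.tool | grep -E 'nodeHostName|MemoryMB'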
Created 06-22-2017 11:31 AM
I've got all the numbers in front of me. However, how much YARN gets is governed by the static pools, which are not set in our cluster.
YARN still restricts itself to a fixed amount of resources. Our cluster has 48 TB of memory across the data nodes, and YARN is capping itself at 18 TB even without any static pool configuration.
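For reference, that 18 TB figure matches what the ResourceManager itself reports as its total. A quick sketch to pull it (rm-host is a placeholder and 8088 the default RM web port):

  # totalMB is the cluster-wide memory YARN believes it can schedule
  curl -s "http://rm-host:8088/ws/v1/cluster/metrics" | python -m json.tool | grep -E 'totalMB|totalVirtualCores'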
Created 06-22-2017 12:57 PM
What do you mean by "YARN is restricting itself at 18 TB"? I hope you are referring to disk space...
Is your problem related to disk space, memory, or something else?
Created 06-22-2017 01:02 PM
YARN is not going beyond 18 TB of memory (RAM), even without any % set in the static pools.
Created 06-22-2017 01:18 PM
Is that a typo? Are you using TB by mistake instead of GB? I have never heard of a RAM capacity of 18 TB.
I am asking because you are repeatedly using terabytes (TB)... am I missing something?
Created 06-22-2017 09:22 PM
Created 06-23-2017 11:44 AM
Oh OK, got it...
Go to CM -> Yarn -> Configuration -> search for "yarn.nodemanager.resource.memory-mb"
It will show you the memory limit set for each node (it is read from yarn-site.xml).
You can tweak this a little.
Note:
1. The memory is shared by all the services, so you cannot give all of it to YARN alone. Also, don't increase the above setting too much, because it may cause memory contention across the services. You could set it to approximately 50% of total memory, but that depends on the memory utilization of the other services. Since you have 183 nodes, the 50% figure is not universal; it will vary case by case.
2. Also, when you increase the memory on each node, keep yarn.scheduler.maximum-allocation-mb in mind: a single container can never be allocated more than that value, and it should not be set higher than yarn.nodemanager.resource.memory-mb.
Hope this gives you some idea.
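For what it's worth, the arithmetic backs this up: YARN's total capacity is simply yarn.nodemanager.resource.memory-mb summed across all NodeManagers. A rough back-of-the-envelope sketch with the numbers from this thread (assuming all 183 nodes share the same 96 GB setting):

  echo "$((183 * 96)) GB total for YARN"       # 17568 GB, roughly the ~18 TB ceiling observed
  echo "$((48 * 1024 / 183)) GB RAM per node"  # ~268 GB per node if the cluster really has 48 TB
  # So the current setting is about 96/268 ~ 36% of each node's RAM;
  # raising yarn.nodemanager.resource.memory-mb is what lifts the cluster-wide ceiling.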