Member since: 06-22-2017
Posts: 5
Kudos Received: 0
Solutions: 0
06-22-2017
09:22 PM
We have a huge cluster: 256 GB of RAM on each data node and 184 data nodes in the cluster, so roughly 47 TB of memory in total. Yes, it is terabytes, and it is not a typo.
06-22-2017
01:02 PM
YARN is not going beyond 18 TB of memory (RAM), even though no percentages are set on the static pools.
06-22-2017
11:31 AM
I have all the numbers in front of me; however, how much YARN gets is governed by the static pools, which are not set in our cluster. YARN still restricts itself to a fixed amount of resources. Our cluster has 48 TB of memory across the data nodes, and YARN is capping itself at 18 TB even without any static pool configuration.
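Assuming the yarn.nodemanager.resource.memory-mb value of 96 GB quoted below applies uniformly to all 184 data nodes, a rough back-of-the-envelope check of the two totals:

    256 GB physical RAM  x 184 data nodes = 47,104 GB ≈ 47-48 TB  (total memory on the data nodes)
    96 GB for NodeManager x 184 data nodes = 17,664 GB ≈ 17-18 TB  (memory advertised to YARN)

This suggests the ~18 TB ceiling may simply be the sum of the per-NodeManager memory setting rather than anything imposed by static pools.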
06-22-2017
10:56 AM
yarn.nodemanager.resource.memory-mb = 96 GB
yarn.nodemanager.pmem-check-enabled = not present in the configuration or yarn-site.xml
yarn.nodemanager.vmem-pmem-ratio = not present in the configuration or yarn-site.xml
yarn.nodemanager.resource.cpu-vcores = 24
yarn.scheduler.minimum-allocation-vcores = 1
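As a sketch only (not this cluster's actual file; in a Cloudera Manager deployment these values are normally managed through the YARN service configuration rather than a hand-edited yarn-site.xml), the settings above would typically be declared as follows. The 98304 MB figure assumes the 96 GB quoted above, and the pmem/vmem entries show the stock YARN defaults that apply when the properties are absent:

    <!-- Hypothetical yarn-site.xml fragment based on the values quoted in this thread -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>98304</value>  <!-- 96 GB per NodeManager -->
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>24</value>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-vcores</name>
      <value>1</value>
    </property>
    <property>
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>true</value>  <!-- YARN default when the property is not set -->
    </property>
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>   <!-- YARN default when the property is not set -->
    </property>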
06-22-2017
09:08 AM
In our cluster we did not set any static pool allocations for YARN/HBase/Impala/HDFS, yet YARN is still restricted to a certain amount of memory/vcores. Do we know why it cannot take as many resources as the containers need, rather than leaving them pending?
Labels:
- Apache YARN