Recently I deployed a 4-node cluster with CDH 5.7.1; the hardware details are as follows:
RAM - 16 GB, hard disk - 500 GB
Can someone tell me how to allocate resources to the services? I am mostly using Hive, Spark, and HBase (I/O-intensive jobs).
Are there any pre-defined calculations for how to split the resources? Please share.
Just FYI: I looked at the Cloudera performance tuning sheet and it didn't help me much.
I think Static Service Pools are what you are looking for. This documentation can help you:
Thanks for the reply, I will go through this link. To put my request simply: I want to assign values for YARN parameters like:
yarn.scheduler.maximum-allocation-mb, yarn.nodemanager.resource.memory-mb, yarn.app.mapreduce.am.resource.cpu-vcores, yarn.app.mapreduce.am.resource.mb, etc.
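Not an official Cloudera formula, but a common rule-of-thumb sizing calculation for parameters like these looks roughly as sketched below. The 16 GB of RAM is from the post above; the 8-vcore count, the 4 GB reserved for the OS and co-located services (HBase, DataNode, etc.), and the 1 GB minimum container size are all assumptions you would adjust for your own nodes:

```python
# Rough per-node YARN sizing sketch (rule of thumb, not an official formula).
total_ram_mb = 16 * 1024                 # from the post: 16 GB per node
reserved_mb = 4 * 1024                   # ASSUMED: left for OS, HBase, DataNode, etc.
vcores = 8                               # ASSUMED: physical cores per node

# Memory YARN may hand out on this node:
nodemanager_mem_mb = total_ram_mb - reserved_mb   # yarn.nodemanager.resource.memory-mb

min_container_mb = 1024                  # yarn.scheduler.minimum-allocation-mb (assumed)
max_container_mb = nodemanager_mem_mb    # yarn.scheduler.maximum-allocation-mb
                                         # (no single container larger than the node offers)

# MapReduce ApplicationMaster is usually small:
am_mem_mb = 2 * min_container_mb         # yarn.app.mapreduce.am.resource.mb
am_vcores = 1                            # yarn.app.mapreduce.am.resource.cpu-vcores

print(nodemanager_mem_mb, max_container_mb, am_mem_mb)
```

With these assumptions you would get 12288 MB for the NodeManager and a 2048 MB ApplicationMaster; the point is the structure of the calculation (total minus reserved, then derive the container bounds), not the exact numbers.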
One more thing: I am confused about the parameter Heap to Container Size Ratio (in my case it is 0.8).