03-25-2017 05:09 PM
I see the problem. Second cluster hardware (3 nodes): 64 logical cores, 256 GB RAM, 2 TB HDD per node. By my maths, total cores = 64 * 3 = 192 and total RAM = 256 * 3 = 768 GB.

The NodeManager capacities, yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, should probably be set to 252 * 1024 = 258048 (megabytes) and 60 respectively. We avoid allocating 100% of the resources to YARN containers because each node needs some resources to run the OS and Hadoop daemons. In this case, leave 4 GB and 4 cores for these
system processes. Check yarn-site.xml; it may already be set, and if not, set it yourself.

Now, a better option would be to use --num-executors 34 --executor-cores 5 --executor-memory 19G --driver-memory 32G. Why? At 5 cores per executor, each node fits 60 / 5 = 12 executors, so 3 nodes give 36 slots; requesting 34 leaves headroom on the node hosting the ApplicationMaster. --executor-memory was derived as 252 GB / 12 executors per node = 21 GB, minus roughly 7% for off-heap overhead: 21 * 0.07 = 1.47, and 21 - 1.47 ≈ 19 GB.
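For reference, the yarn-site.xml fragment for these capacities might look like the following. This is a sketch under the assumptions above (4 GB and 4 cores reserved per node); the property names are the standard YARN ones, and the values come from the arithmetic in this post:

```xml
<!-- yarn-site.xml: per-NodeManager capacity, leaving 4 GB / 4 cores
     on each node for the OS and Hadoop daemons -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>258048</value> <!-- (256 - 4) GB * 1024 -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>60</value> <!-- 64 logical cores - 4 reserved -->
</property>
```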
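The sizing arithmetic above can be sketched as a few lines of Python. Assumptions are labeled in the comments: the ~7% overhead figure corresponds to Spark's default executor memory overhead (roughly 7% of executor memory), and reserving two of the 36 executor slots for the ApplicationMaster is my reading of how 34 was reached:

```python
# Sketch of the YARN/Spark sizing arithmetic from this post.
# Assumptions: 3 nodes with 256 GB RAM and 64 logical cores each;
# 4 GB / 4 cores reserved per node for the OS and Hadoop daemons;
# ~7% per-executor off-heap overhead (Spark's default memory overhead);
# 2 executor slots left free for the ApplicationMaster.

NODES = 3
RAM_GB_PER_NODE = 256
CORES_PER_NODE = 64
RESERVED_GB = 4
RESERVED_CORES = 4
EXECUTOR_CORES = 5
OVERHEAD_FRACTION = 0.07
AM_SLOTS = 2  # assumption: slots held back for the ApplicationMaster

# NodeManager capacities
yarn_memory_mb = (RAM_GB_PER_NODE - RESERVED_GB) * 1024  # 252 * 1024 = 258048
yarn_vcores = CORES_PER_NODE - RESERVED_CORES            # 60

# Executor sizing
executors_per_node = yarn_vcores // EXECUTOR_CORES                    # 60 / 5 = 12
raw_mem_per_executor = (RAM_GB_PER_NODE - RESERVED_GB) / executors_per_node  # 21 GB
executor_memory_gb = int(raw_mem_per_executor * (1 - OVERHEAD_FRACTION))     # ~19 GB
num_executors = executors_per_node * NODES - AM_SLOTS                 # 36 - 2 = 34

print(yarn_memory_mb, yarn_vcores, num_executors, executor_memory_gb)
```

Running it reproduces the numbers quoted above: 258048 MB and 60 vcores per NodeManager, 34 executors, 19 GB each.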