Created 02-28-2017 08:36 PM
I am a newbie to Cloudera Hadoop, working on a MapReduce program.
I would like to increase the DataNode memory and the NodeManager memory.
Should I increase them in hadoop-env.sh and yarn-site.xml via the properties below?
I have been allocated around 128 GB of RAM on my slave nodes.
Is there any baseline to start with, or is it more of a crapshoot? My program does heavy computation.
If I want to increase the DataNode memory, should I put the numbers inside HADOOP_DATANODE_OPTS, or is there some other setting? (See my guess at an example after the list below.) Please help me with this.
yarn.nodemanager.resource.memory-mb
yarn.app.mapreduce.am.resource.mb
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
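For example, is something like the following the right way to do it? (The numbers here are placeholders I made up for illustration, not values I have settled on.)

In hadoop-env.sh:

# placeholder: give the DataNode daemon a 4 GB heap
export HADOOP_DATANODE_OPTS="-Xmx4g ${HADOOP_DATANODE_OPTS}"

In yarn-site.xml:

<!-- placeholder: total memory the NodeManager can hand out to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>102400</value>
</property>

In mapred-site.xml:

<!-- placeholder container sizes for the MapReduce AM, map, and reduce tasks -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>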
Any information is highly appreciated.
Thanks
Created 02-28-2017 09:12 PM
Created 02-28-2017 10:20 PM
Thanks for the response. I read your thread; everything in that post pertains to MapReduce.
We have 128 GB allocated for the slave nodes.
Could you please let me know what base number we should start with for:
yarn.nodemanager.resource.memory-mb
We have two hexa-core CPUs running (2 x 6 = 12 physical cores). Please let me know if this is correct:
yarn.nodemanager.resource.cpu-vcores = 12
yarn.scheduler.minimum-allocation-vcores = 1
yarn.scheduler.maximum-allocation-vcores = 10
What should go inside the following? My tentative starting point is sketched below.
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
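For reference, here is the kind of starting point I have in mind for yarn-site.xml, assuming roughly 20% of the 128 GB is reserved for the OS and the Hadoop daemons (placeholder values; please correct me if they are off base):

<!-- ~102 GB of the 128 GB left for YARN containers after OS/daemon overhead -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>104448</value>
</property>
<!-- smallest container YARN will grant (requests are typically rounded up to a multiple of this) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<!-- largest single container; commonly set equal to the NodeManager total -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>104448</value>
</property>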
Created 03-01-2017 12:29 PM
Created 03-01-2017 01:09 PM
Here is one of our Community Knowledge articles that may also be of assistance when calculating memory size.
Selecting the Right Hardware for Your New Hadoop Cluster