I am a newbie to Cloudera Hadoop, working on a MapReduce program.
I would like to increase the DataNode memory and the NodeManager memory.
Should I increase them in hadoop-env.sh and yarn-site.xml, in the tags below?
I have been allocated around 128 GB of RAM on each of my slave nodes.
So is there any baseline to start with, or is it more of a crapshoot? My program is indeed computation-heavy.
If I want to increase the DataNode memory, should I put the numbers inside HADOOP_DATANODE_OPTS, or is there some other tag? Please help me with this.
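For the DataNode heap specifically, one common approach is to pass a JVM heap flag through HADOOP_DATANODE_OPTS in hadoop-env.sh. A minimal sketch, assuming a 4 GB heap purely as an illustrative starting value (not a recommendation for any particular workload):

```shell
# hadoop-env.sh -- illustrative only; the 4 GB heap (-Xmx4g) is an
# assumed starting value, not a tuned recommendation
export HADOOP_DATANODE_OPTS="-Xmx4g ${HADOOP_DATANODE_OPTS}"
```

On a Cloudera-managed cluster the DataNode heap is usually set through Cloudera Manager rather than by editing hadoop-env.sh directly, so check which mechanism applies to your deployment.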
yarn.nodemanager.resource.memory-mb
yarn.app.mapreduce.am.resource.mb
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
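As a sketch of how these properties look in the config files, with illustrative starting values for a 128 GB node (reserving roughly 20% of RAM for the OS and Hadoop daemons — the values are assumptions, not recommendations; note that only the first property lives in yarn-site.xml, the other three belong in mapred-site.xml):

```xml
<!-- yarn-site.xml : illustrative starting values for a 128 GB node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>104448</value> <!-- ~102 GB for containers, leaving headroom for OS/daemons -->
</property>

<!-- mapred-site.xml : per-container sizes, assumed starting values -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
```

The container sizes then bound how many mappers/reducers can run concurrently on a node (e.g. 104448 / 4096 ≈ 25 map containers), so they are tuned together with the per-node total.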
Any information is highly appreciated.
Thanks for the response. I read your thread; everything in that post pertains to MapReduce.
We have 128 GB allocated for the slave nodes.
Could you please let me know what base numbers I should start with?
We have 2 hex-core CPUs running. Please let me know if this is correct:
yarn.nodemanager.resource.cpu-vcores = 12
yarn.scheduler.minimum-allocation-vcores = 1
yarn.scheduler.maximum-allocation-vcores = 10
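A sketch of those same settings as they would appear in yarn-site.xml, keeping the values above (note the correct property name is yarn.scheduler.maximum-allocation-vcores, with a dot after "yarn", not "yarn-scheduler"):

```xml
<!-- yarn-site.xml : vcore settings as proposed above; verify against your workload -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value> <!-- 2 sockets x 6 cores = 12 physical cores -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>10</value>
</property>
```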
What values should go inside these?
Here is one of our Community Knowledge articles that may also be of assistance when calculating memory size.