Support Questions


Container is running beyond memory limits

New Contributor

On Hadoop v2 (HDP 2.3 stack), in a single-node cluster: my machine has 2 cores and 14 GB of memory, of which 5.6 GB is free. When I run a streaming job with a 400 KB input file on this machine under YARN, I get a container error:

Container [pid=58662,containerID=container_e03_1474273534378_0008_01_000002] is running beyond physical memory limits. Current usage: 2.6 GB of 2.5 GB physical memory used; 4.1 GB of 5.3 GB virtual memory used. Killing container. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143

The same script succeeds with a 245 KB input file, but fails for input files of 400 KB and above.

By default, I have these settings:

yarn-site.xml

-----------------

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>10240</value>
</property>

<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>1</value>
</property>

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2560</value>
</property>

<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
</property>
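As a quick sanity check on the values above (a sketch, not part of the original question): YARN rounds each container request up to a multiple of the minimum allocation, so the NodeManager memory and the minimum allocation together bound how many containers can run at once on this node:

```shell
# Sketch: how many containers fit on one NodeManager,
# given the yarn-site.xml values above.
node_mb=10240        # yarn.nodemanager.resource.memory-mb
min_alloc_mb=2560    # yarn.scheduler.minimum-allocation-mb

max_containers=$(( node_mb / min_alloc_mb ))
echo "$max_containers"   # prints 4
```

With only 5.6 GB actually free on the machine, far fewer than 4 such containers can realistically run in parallel, which is relevant to the queuing issue discussed below.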

mapred-site.xml

---------------------------

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2048m</value>
</property>

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2560</value>
</property>

<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx4096m</value>
</property>

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>5120</value>
</property>
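Note how the heap and container sizes above relate (a sketch using the common 80% rule of thumb, which is an assumption, not something stated in this thread): `mapreduce.map.memory.mb` is the limit the NodeManager enforces on the whole process tree, while `-Xmx` only caps the JVM heap, so the heap is usually set to roughly 80% of the container size to leave headroom for JVM overhead:

```shell
# Sketch: derive -Xmx from the container size, leaving ~20% headroom
# for off-heap JVM overhead (the 0.8 rule of thumb; an assumption).
container_mb=2560    # mapreduce.map.memory.mb above
heap_mb=$(( container_mb * 8 / 10 ))
echo "-Xmx${heap_mb}m"   # prints -Xmx2048m, matching mapreduce.map.java.opts above
```

The error message ("2.6 GB of 2.5 GB physical memory used") means the total process tree exceeded the 2560 MB container limit, even though the heap itself was capped at 2048 MB.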

Kindly help with this.


Rising Star

@muthyalapaa,

You can try increasing the value of "mapreduce.map.memory.mb". I don't know for sure whether this will solve your problem, but it is worth trying.
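If you'd rather not raise the cluster-wide default, the same setting can be passed per job on the streaming command line via generic `-D` options. A sketch only: the jar path and input/output/script names below are placeholders, not taken from this thread (2816 MB is 2.75 GB, with `-Xmx` at roughly 80% of it):

```shell
# Sketch: per-job memory override for a streaming job.
# The jar path and file names are hypothetical placeholders.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
  -D mapreduce.map.memory.mb=2816 \
  -D mapreduce.map.java.opts=-Xmx2252m \
  -input /user/demo/input.txt \
  -output /user/demo/out \
  -mapper mapper.py \
  -reducer reducer.py
```

This way only the job that needs the larger container requests it, and other jobs keep the smaller default.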

New Contributor

@Nitin Shelke, thanks for the help. Increasing the value of "mapreduce.map.memory.mb" from 2.5 GB to 2.75 GB worked.

But is 2.75 GB really needed just for a 400 KB file? Because of this, other parallel jobs are getting delayed; they seem to be queued until the current job completes, since it takes 2.75 GB of the 5 GB available.

Is there any way to fine-tune this scenario?

Rising Star

@Muthyalapaa, you can follow this link for tuning YARN:

http://crazyadmins.com/tag/tuning-yarn-to-get-maximum-performance/

New Contributor

Thank you for the tutorial. In my multi-node Hadoop installation everything works fine except when I run start-yarn.sh. The ResourceManager starts correctly on the master node, but I cannot see any NodeManager on the worker nodes. When I checked my log files I saw the following error: "NodeManager doesn't satisfy minimum allocations". Similarly, I have also seen this in my log file: "Initialized nodemanager for null: physical-memory=-1 virtual-memory=-2 virtual-cores=-1". I have no idea why it is initializing with these values.

Can you please help me figure out this issue?