I have a MapReduce job that failed with an out-of-memory error.
Application application_1484466365663_87038 failed 2 times due to AM Container for appattempt_1484466365663_87038_000002 exited with exitCode: -104
Diagnostics: Container [pid=7448,containerID=container_e29_1484466365663_87038_02_000001] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 6.6 GB of 6.3 GB virtual memory used. Killing container.
Dump of the process-tree for container_e29_1484466365663_87038_02_000001 :
When I check the memory configured for the map task and for the Application Master in Cloudera Manager, it's 2 GB.
I also checked the job configuration in YARN and it's 2 GB:
mapreduce.map.memory.mb = 2 GB
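For reference, here is how those settings would look as job configuration properties. The mapper value is the one I see in YARN; `yarn.app.mapreduce.am.resource.mb` is the standard property name for the AM container memory, included here on the assumption that it matches the 2 GB shown in Cloudera Manager:

```xml
<!-- Map task container memory, as shown in the job configuration (2 GB). -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>

<!-- Application Master container memory (standard property name; the value
     is assumed to match the 2 GB shown in Cloudera Manager). -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
```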
I have 2 questions:
1- How do I know whether this container is the AM container or a mapper container? Does the above error indicate that the AM memory was exceeded?
2- Why is it alerting on 3 GB when all my configuration is 2 GB?
The solution is clear to me: I need to increase the memory.
My concern is why it's alerting on 3 GB of memory and not on the mapper memory, which is 6 GB, or on the Oozie launcher, which is 4 GB. Also, is it alerting on the mapper memory or on the Application Master memory?
The map container memory was set to 4 GB. Presumably the heap value was set to 3 GB (newer versions have a percentage that auto-sets the heap size from the container size; the default percentage is 80%, and 3/4 is 75%). The 6 GB comes from virtual memory, whose check I recommend simply disabling, as it can cause weird OOM-style container kills. The default virtual memory ratio is 2.1, which doesn't come out to 6 from 4. The log even states that the latter figure is the virtual memory size.
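As a sanity check on that ratio (assuming `yarn.nodemanager.vmem-pmem-ratio` is at its default of 2.1), the virtual-memory limit in the log lines up with the 3 GB physical limit from the same log line, not with a 4 GB container:

```python
# Sketch of YARN's virtual-memory limit arithmetic, assuming the default
# yarn.nodemanager.vmem-pmem-ratio of 2.1.
physical_limit_gb = 3.0   # "3 GB physical memory used" limit from the log
vmem_pmem_ratio = 2.1     # YARN default

vmem_limit_gb = physical_limit_gb * vmem_pmem_ratio
print(round(vmem_limit_gb, 1))  # 6.3 -- matches "6.6 GB of 6.3 GB virtual memory used"
```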
Set `yarn.nodemanager.vmem-check-enabled = false` to disable the check.
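In yarn-site.xml form, the entry would look like this (the NodeManagers need a restart to pick it up):

```xml
<!-- Disable the NodeManager's virtual-memory check so containers are not
     killed for exceeding the vmem limit (the physical limit still applies). -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```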
How can I disable `yarn.nodemanager.vmem-check-enabled`? I tried adding it to the `NodeManager Advanced Configuration Snippet (Safety Valve) for yarn-site.xml`, but I don't see it in the yarn-site.xml on the nodes.