Reply
Expert Contributor
Posts: 82
Registered: ‎02-24-2016

Container killed on request, while running mapred jobs

I am running the TeraSort benchmark. The map phase goes well, but when it reaches the reduce phase I get the messages below, and ultimately the job fails.

16/05/20 14:43:00 INFO mapreduce.Job: Task Id : attempt_1463557283514_0017_r_000006_2, Status : FAILED
Container [pid=63321,containerID=container_1463557283514_0017_01_023529] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 8.1 GB of 8.4 GB virtual memory used. Killing container.
Dump of the process-tree for container_1463557283514_0017_01_023529 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 63321 63319 63321 63321 (bash) 0 0 115843072 361 /bin/bash -c /usr/java/jdk1.8.0_60/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx6144m -Djava.io.tmpdir=/u05/yarn/nm/usercache/ksm8kor/appcache/application_1463557283514_0017/container_1463557283514_0017_01_023529/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/u03/yarn/container-logs/application_1463557283514_0017/container_1463557283514_0017_01_023529 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.47.35.94 47528 attempt_1463557283514_0017_r_000006_2 23529 1>/u03/yarn/container-logs/application_1463557283514_0017/container_1463557283514_0017_01_023529/stdout 2>/u03/yarn/container-logs/application_1463557283514_0017/container_1463557283514_0017_01_023529/stderr
        |- 63326 63321 63321 63321 (java) 2909 2706 8574074880 1089764 /usr/java/jdk1.8.0_60/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx6144m -Djava.io.tmpdir=/u05/yarn/nm/usercache/ksm8kor/appcache/application_1463557283514_0017/container_1463557283514_0017_01_023529/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/u03/yarn/container-logs/application_1463557283514_0017/container_1463557283514_0017_01_023529 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.47.35.94 47528 attempt_1463557283514_0017_r_000006_2 23529

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Cloudera Employee
Posts: 55
Registered: ‎03-07-2016

Re: Container killed on request, while running mapred jobs

Your reducer is consuming more memory than your job requested for it.

1) You can try increasing the amount of memory your job requests for the reducer.

2) If the reducer is still being killed for exceeding its limit after you've increased the memory, you may want to investigate the reducer itself and see what is happening there. Maybe there is a memory leak.
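For point 1, the memory can be raised at submit time with generic `-D` options, for example (a sketch only: the jar path and the exact sizes are illustrative, not recommendations; the key point is that the heap stays comfortably below the container size):

```shell
# Request bigger reduce containers for TeraSort at submit time.
# 8192 MB container with a ~6.4 GB heap leaves headroom for non-heap
# memory (thread stacks, metaspace, direct buffers).
hadoop jar hadoop-mapreduce-examples.jar terasort \
  -D mapreduce.reduce.memory.mb=8192 \
  -D "mapreduce.reduce.java.opts=-Xmx6553m" \
  /teragen-out /terasort-out
```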

Cloudera Employee
Posts: 55
Registered: ‎03-07-2016

Re: Container killed on request, while running mapred jobs

Sorry, I forgot that you were running TeraSort, so most likely it just needs more memory. My second point is invalid.

Expert Contributor
Posts: 82
Registered: ‎02-24-2016

Re: Container killed on request, while running mapred jobs

Thanks for the reply. I fixed the issue as follows.

Previously, no matter how much I increased the map, reduce, Resource Manager, and container memory settings, the job kept using more than I configured. Finally I switched back to the Cloudera default Java. I had been using 1.8.0_60, which I had set explicitly in Cloudera Manager. Once I reverted that parameter, along with the other changed parameters, to the CM default settings, the job started working.
Expert Contributor
Posts: 82
Registered: ‎02-24-2016

Re: Container killed on request, while running mapred jobs

I still don't understand why Java 1.8.0_60 asks for more memory. According to the official docs it should be supported. @haibochen, could you please explain?
Cloudera Employee
Posts: 55
Registered: ‎03-07-2016

Re: Container killed on request, while running mapred jobs

As you mentioned, you made a few other config changes as well. It could have been several things together that caused the issue. You can try changing just the Java version back to 1.8.0_60 and see whether the memory requirement increases. As for why Java 1.8.0 needs more memory, I'm afraid I'm not of much help there.

Expert Contributor
Posts: 82
Registered: ‎02-24-2016

Re: Container killed on request, while running mapred jobs

[ Edited ]

I changed these parameters:

mapreduce_map_memory_mb 2 GB
mapreduce_map_java_opts -Xmx1800M
mapreduce.reduce.java.opts -Xmx6144m
mapreduce_reduce_memory_mb 4 GB
mapreduce.task.timeout 10

But none of those worked for me, so I changed everything back to the defaults, including JAVA_HOME.
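One thing worth noting in the settings above: the reduce heap (-Xmx6144m) is larger than the reduce container (4 GB), so the JVM can legitimately grow past the container's physical limit and YARN will kill it, which matches the log ("4.2 GB of 4 GB physical memory used"). A common rule of thumb (not an official Cloudera figure) is to size the heap at roughly 80% of the container:

```shell
# Rule-of-thumb sketch: keep -Xmx at ~80% of the container size so
# non-heap memory (thread stacks, metaspace, direct buffers) fits too.
container_mb=4096                          # mapreduce_reduce_memory_mb
heap_mb=$(( container_mb * 80 / 100 ))     # suggested heap in MB
echo "-Xmx${heap_mb}m"                     # prints -Xmx3276m
```

With a 4 GB container, the heap should be well under 4096 MB, not the 6144 MB that was configured.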

New Contributor
Posts: 3
Registered: ‎10-10-2018

Re: Container killed on request, while running mapred jobs

Hi,

I'm facing the same issue as yours.

 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

 

Container [pid=22442,containerID=container_1512747150092_15676_01_000003] is running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical memory used; 11.1 GB of 2.1 GB virtual memory used. Killing container.

 

The problem occurs when I try to import data into Hive through Sqoop with a query like the one below:

sqoop import \
  --connect jdbc:sqlserver://.server IP. \
  --username <USERNAME> --password <PASSWORD> \
  --query "Select * from TABLE NAME where XYZ \$CONDITIONS" \
  --target-dir /XYZ \
  --null-string '\\N' --null-non-string '\\N' \
  --hive-delims-replacement '\0D' \
  --fields-terminated-by "\001" \
  --hive-import --hive-table TABLE NAME \
  -m 1

 

After changing the YARN configuration so that

yarn.nodemanager.resource.memory-mb >= yarn.scheduler.maximum-allocation-mb

the issue remained, because the container's memory was still being exhausted. The container needs more memory for its processing, so I then passed the -D mapreduce.map.memory.mb=2024 parameter with the query above, and it works smoothly without any issue in my case.
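For anyone else hitting this: in Sqoop, generic Hadoop -D options must come immediately after the tool name, before any tool-specific arguments, or they are silently ignored. A hedged sketch of the working invocation (connection details, table names, and memory values are placeholders, not recommendations):

```shell
sqoop import \
  -D mapreduce.map.memory.mb=2048 \
  -D "mapreduce.map.java.opts=-Xmx1638m" \
  --connect "jdbc:sqlserver://<SERVER>" \
  --username <USERNAME> --password <PASSWORD> \
  --query "SELECT * FROM <TABLE> WHERE \$CONDITIONS" \
  --target-dir /XYZ \
  --hive-import --hive-table <TABLE> \
  -m 1
```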

 

Thanks.

 

 
