FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

New Contributor
[boco@cloud92 ~]$ hive
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/08/22 15:56:39 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/08/22 15:56:40 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> use datamine;
OK
Time taken: 0.687 seconds
hive> SELECT initid_alarmid1, 
    >             COUNT (*) AS ab_num
    >        FROM tab5_11
    >    GROUP BY initid_alarmid1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_1408689319506_0036, Tracking URL = http://cloud61:8088/proxy/application_1408689319506_0036/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job  -kill job_1408689319506_0036
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-22 15:57:10,729 Stage-1 map = 0%,  reduce = 0%
2014-08-22 15:57:13,902 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.48 sec
2014-08-22 15:57:14,949 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.48 sec
MapReduce Total cumulative CPU time: 4 seconds 480 msec
Ended Job = job_1408689319506_0036 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1408689319506_0036_m_000000 (and more) from job job_1408689319506_0036

Task with the most failures(1): 
-----
Task ID:
  task_1408689319506_0036_r_000000

URL:
  http://cloud61:8088/taskdetails.jsp?jobid=job_1408689319506_0036&tipid=task_1408689319506_0036_r_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:409)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:399)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:290)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:178)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 13 more
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:116)
        ... 18 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 4.48 sec   HDFS Read: 111504306 HDFS Write: 179018 FAIL
Total MapReduce CPU Time Spent: 4 seconds 480 msec

 

Would anyone be able to help me? Thank you.


Explorer
Could you pull the stderr log from the Hue web interface, and also check the failed task's logs in the job tracker?

I've noticed the console logs are not very detailed in their error messages.
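
For example, if YARN log aggregation is enabled on your cluster (an assumption on my part), you can also dump the full logs for the failed job from the command line and search them for the reducer's stack trace, using the application ID from the tracking URL:

yarn logs -applicationId application_1408689319506_0036 > app_1408689319506_0036.log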

New Contributor

Thank you for your reply! Details as follows:

 Log Type: stderr

Log Length: 242

log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.TaskAttemptListenerImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.


Log Type: stdout

Log Length: 0


Log Type: syslog

Log Length: 339651

Showing 4096 bytes of 339651 total.

 Size of the outstanding queue size is 0
2014-09-01 00:45:18,444 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://cloud60:8020/user/boco/.staging/job_1409203388885_0011/job_1409203388885_0011_1.jhist to hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011-1409503505152-boco-create+table+tab5_16+as%0Ase...initid_alarmid2%28Stage-1409503518377-3-0-FAILED-root.boco-1409503515788.jhist_tmp
2014-09-01 00:45:18,484 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011-1409503505152-boco-create+table+tab5_16+as%0Ase...initid_alarmid2%28Stage-1409503518377-3-0-FAILED-root.boco-1409503515788.jhist_tmp
2014-09-01 00:45:18,491 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://cloud60:8020/user/boco/.staging/job_1409203388885_0011/job_1409203388885_0011_1_conf.xml to hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011_conf.xml_tmp
2014-09-01 00:45:18,536 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011_conf.xml_tmp
2014-09-01 00:45:18,549 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011.summary_tmp to hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011.summary
2014-09-01 00:45:18,555 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011_conf.xml_tmp to hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011_conf.xml
2014-09-01 00:45:18,559 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011-1409503505152-boco-create+table+tab5_16+as%0Ase...initid_alarmid2%28Stage-1409503518377-3-0-FAILED-root.boco-1409503515788.jhist_tmp to hdfs://cloud60:8020/user/history/done_intermediate/boco/job_1409203388885_0011-1409503505152-boco-create+table+tab5_16+as%0Ase...initid_alarmid2%28Stage-1409503518377-3-0-FAILED-root.boco-1409503515788.jhist
2014-09-01 00:45:18,561 INFO [Thread-75] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2014-09-01 00:45:18,562 ERROR [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: Returning, interrupted : java.lang.InterruptedException
2014-09-01 00:45:18,563 INFO [Thread-75] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Task failed task_1409203388885_0011_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

2014-09-01 00:45:18,563 INFO [Thread-75] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://cloud60:19888/jobhistory/job/job_1409203388885_0011
2014-09-01 00:45:18,570 INFO [Thread-75] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2014-09-01 00:45:19,572 INFO [Thread-75] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://cloud60:8020 /user/boco/.staging/job_1409203388885_0011
2014-09-01 00:45:19,581 INFO [Thread-75] org.apache.hadoop.ipc.Server: Stopping server on 58400
2014-09-01 00:45:19,582 INFO [IPC Server listener on 58400] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 58400
2014-09-01 00:45:19,583 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2014-09-01 00:45:19,586 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder


New Contributor

Did you get any resolution for this?

New Contributor

Greetings

 

If by chance you are still looking to resolve a return code 2 error while running Hive, I may have a solution for you if you don't get any information from the log files. Return code 2 is basically a camouflage for a Hadoop/YARN memory problem: there are not enough resources configured in Hadoop/YARN to run your jobs. If you are running a single-node cluster, see the link below.

 

http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-...
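
As a rough sketch of what is involved (the property names are the standard YARN/MapReduce ones, but the values here are placeholders you would need to size for your own nodes): yarn.nodemanager.resource.memory-mb and yarn.scheduler.minimum/maximum-allocation-mb in yarn-site.xml control how much memory YARN can hand out per node and per container, while the per-job requests can be set from the Hive session or a .hql script, e.g.:

-- placeholder values; size these to the memory actually available on your nodes
set mapreduce.map.memory.mb=2048;         -- container size requested for each map task
set mapreduce.reduce.memory.mb=4096;      -- container size requested for each reduce task
set mapreduce.map.java.opts=-Xmx1638m;    -- JVM heap, roughly 80% of the container
set mapreduce.reduce.java.opts=-Xmx3276m;

The requested container sizes must fit inside what yarn-site.xml allows, or the job will not get any containers.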

 

You may be able to tweak the settings depending on your cluster setup. If this does not cure your problem 100%, then at least the return code 2 or exit code 1 errors should disappear. Hope this helps.

 

Rising Star

Hi Folks,

 

I am also getting a similar error. I tried increasing the minimum memory allocation for the container, but it didn't help. Does anyone have any other suggestions? Here is a snippet from the syslogs:

 

2016-07-25 15:11:12,181 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://nameservice1:8020]
2016-07-25 15:11:12,379 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2016-07-25 15:11:12,381 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1469459226704_0003_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2016-07-25 15:11:12,381 INFO [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_e92_1469459226704_0003_01_000001 taskAttempt attempt_1469459226704_0003_m_000000_0
2016-07-25 15:11:12,383 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1469459226704_0003_m_000000_0] using containerId: [container_e92_1469459226704_0003_01_000001 on NM: [usstlz-pinfap27.emrsn.org:8041]
2016-07-25 15:11:12,383 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.LocalContainerLauncher: mapreduce.cluster.local.dir for uber task: /grid/disk01/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk02/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk03/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk04/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk05/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk06/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk07/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk08/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk09/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk10/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk11/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003,/grid/disk12/yarn/nm/usercache/mgarg/appcache/application_1469459226704_0003
2016-07-25 15:11:12,386 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1469459226704_0003_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2016-07-25 15:11:12,386 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1469459226704_0003_m_000000 Task Transitioned from SCHEDULED to RUNNING
2016-07-25 15:11:12,386 INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2016-07-25 15:11:12,394 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2016-07-25 15:11:12,472 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.MapTask: Processing split: org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit@26739de1
2016-07-25 15:11:12,477 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2016-07-25 15:11:12,487 INFO [uber-SubtaskRunner] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
2016-07-25 15:11:12,614 INFO [uber-SubtaskRunner] org.apache.hadoop.conf.Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
2016-07-25 15:11:12,952 INFO [uber-SubtaskRunner] org.apache.hive.jdbc.Utils: Supplied authorities: usstlz-pinfap22.emrsn.org:10000
2016-07-25 15:11:12,953 INFO [uber-SubtaskRunner] org.apache.hive.jdbc.Utils: Resolved authority: usstlz-pinfap22.emrsn.org:10000
2016-07-25 15:11:18,531 INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1469459226704_0003_m_000000_0 is : 1.0
2016-07-25 15:11:45,624 INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1469459226704_0003_m_000000_0 is : 1.0
2016-07-25 15:12:15,702 INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1469459226704_0003_m_000000_0 is : 1.0
2016-07-25 15:12:45,786 INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1469459226704_0003_m_000000_0 is : 1.0
2016-07-25 15:13:15,843 INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1469459226704_0003_m_000000_0 is : 1.0
2016-07-25 15:13:35,339 INFO [Socket Reader #1 for port 46987] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for mgarg (auth:SIMPLE)
2016-07-25 15:13:35,348 INFO [Socket Reader #1 for port 46987] SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for mgarg (auth:TOKEN) for protocol=interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB
2016-07-25 15:13:35,442 INFO [IPC Server handler 0 on 46987] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Kill job job_1469459226704_0003 received from mgarg (auth:TOKEN) at 10.16.148.79
2016-07-25 15:13:35,443 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1469459226704_0003Job Transitioned from RUNNING to KILL_WAIT
2016-07-25 15:13:35,443 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1469459226704_0003_m_000000 Task Transitioned from RUNNING to KILL_WAIT
2016-07-25 15:13:35,443 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1469459226704_0003_m_000000_0 TaskAttempt Transitioned from RUNNING to KILL_CONTAINER_CLEANUP
2016-07-25 15:13:35,444 INFO [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e92_1469459226704_0003_01_000001 taskAttempt attempt_1469459226704_0003_m_000000_0
2016-07-25 15:13:35,444 INFO [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: canceling the task attempt attempt_1469459226704_0003_m_000000_0
2016-07-25 15:13:35,445 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1469459226704_0003_m_000000_0 TaskAttempt Transitioned from KILL_CONTAINER_CLEANUP to KILL_TASK_CLEANUP
2016-07-25 15:13:35,445 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2016-07-25 15:13:35,510 WARN [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://nameservice1/user/mgarg/oozie-oozi/0000004-160722211318443-oozie-oozi-W/hive2-b70b--hive2/output/_temporary/1/_temporary/attempt_1469459226704_0003_m_000000_0
2016-07-25 15:13:35,513 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1469459226704_0003_m_000000_0 TaskAttempt Transitioned from KILL_TASK_CLEANUP to KILLED
2016-07-25 15:13:35,521 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1469459226704_0003_m_000000 Task Transitioned from KILL_WAIT to KILLED
2016-07-25 15:13:35,522 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2016-07-25 15:13:35,523 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1469459226704_0003Job Transitioned from KILL_WAIT to KILL_ABORT
2016-07-25 15:13:35,524 INFO [CommitterEvent Processor #2] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_ABORT
2016-07-25 15:13:35,530 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1469459226704_0003Job Transitioned from KILL_ABORT to KILLED
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2016-07-25 15:13:35,531 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2016-07-25 15:13:35,532 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 2
2016-07-25 15:13:35,533 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event TASK_FAILED
2016-07-25 15:13:35,536 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_KILLED
2016-07-25 15:13:35,561 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://nameservice1:8020/user/mgarg/.staging/job_1469459226704_0003/job_1469459226704_0003_1.jhist to hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003-1469459466974-mgarg-oozie%3Alauncher%3AT%3Dhive2%3AW%3Ddev%2Dcorp%2Dhrdh%2Drec%2Dlanding-1469459615523-0-0-KILLED-root.corp.hrdh-1469459471890.jhist_tmp
2016-07-25 15:13:35,578 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003-1469459466974-mgarg-oozie%3Alauncher%3AT%3Dhive2%3AW%3Ddev%2Dcorp%2Dhrdh%2Drec%2Dlanding-1469459615523-0-0-KILLED-root.corp.hrdh-1469459471890.jhist_tmp
2016-07-25 15:13:35,579 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://nameservice1:8020/user/mgarg/.staging/job_1469459226704_0003/job_1469459226704_0003_1_conf.xml to hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003_conf.xml_tmp
2016-07-25 15:13:35,595 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003_conf.xml_tmp
2016-07-25 15:13:35,601 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003.summary_tmp to hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003.summary
2016-07-25 15:13:35,603 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003_conf.xml_tmp to hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003_conf.xml
2016-07-25 15:13:35,605 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003-1469459466974-mgarg-oozie%3Alauncher%3AT%3Dhive2%3AW%3Ddev%2Dcorp%2Dhrdh%2Drec%2Dlanding-1469459615523-0-0-KILLED-root.corp.hrdh-1469459471890.jhist_tmp to hdfs://nameservice1:8020/user/history/done_intermediate/mgarg/job_1469459226704_0003-1469459466974-mgarg-oozie%3Alauncher%3AT%3Dhive2%3AW%3Ddev%2Dcorp%2Dhrdh%2Drec%2Dlanding-1469459615523-0-0-KILLED-root.corp.hrdh-1469459471890.jhist
2016-07-25 15:13:35,605 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2016-07-25 15:13:35,605 ERROR [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: Returning, interrupted : java.lang.InterruptedException
2016-07-25 15:13:35,606 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Kill job job_1469459226704_0003 received from mgarg (auth:TOKEN) at 10.16.148.79
Job received Kill while in RUNNING state.

Thanks!

 

New Contributor

I am facing the same issue. Does anyone have a resolution for it?

Explorer

We hit the same issue and resolved it as follows:

Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Checking the detailed logs, we found the underlying error: "Unexpected end of input stream".

 

Now get the HDFS LOCATION for the table by running the command below in Hue or the Hive shell:

show create table <table-name>;
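
For example, for the table from the original post, something like this should print the location (assuming your Hive version prints the path on the line after the LOCATION keyword):

hive -e "use datamine; show create table tab5_11;" | grep -i -A1 "LOCATION"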

 

Check for zero-byte files and remove them from that HDFS location using the command below:

hdfs dfs -rm -skipTrash $(hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '{if ($5 == 0) print $8}')
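
If you want to preview what would be deleted before removing anything, you can run just the listing part of that command on its own first (it keeps entries that are not directories and whose size field, column 5, is zero):

hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '{if ($5 == 0) print $8}'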

 

Then try running your query again; it ran successfully for us this time.