
oozie fails in executing hive command

New Contributor
Starting Job = job_1466616907961_1324, Tracking URL = http://asc.com:8080/proxy/application_1466616907961_1324/
Kill Command = /usr/hdp/2.3.4.0-3485/hadoop/bin/hadoop job  -kill job_1466616907961_1324
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-07-11 23:47:39,901 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1466616907961_1324 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec 

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

3 REPLIES

Re: oozie fails in executing hive command

New Contributor

Also, in YARN I find the following error:

Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://asc/user/hdfs/.staging/job_1466616907961_1330/job.splitmetainfo
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1346)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1121)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1553)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1486)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://asc/user/hdfs/.staging/job_1466616907961_1330/job.splitmetainfo
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1319)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
    at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1563

Re: oozie fails in executing hive command

Please provide more information: are you executing the Oozie workflow from a coordinator? What other actions are in the Oozie workflow, and which distribution and cluster version are you using?
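
For reference, here is a minimal sketch of how those details could be gathered with the Oozie CLI; the Oozie URL and workflow job id are placeholders, so substitute your own values:

  # print the workflow definition, status and action list for the failed run
  oozie job -oozie http://<oozie-host>:11000/oozie -definition <workflow-job-id>
  oozie job -oozie http://<oozie-host>:11000/oozie -info <workflow-job-id>
  # pull the Oozie log for the run, which shows the launcher job that failed
  oozie job -oozie http://<oozie-host>:11000/oozie -log <workflow-job-id>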

Re: oozie fails in executing hive command

Guru

@nirup reddy, try executing your job as the yarn user.

The same answer was provided on this HCC thread.
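
If you submit the workflow from the command line, a minimal sketch of resubmitting it as the yarn user could look like the following; the Oozie URL and the job.properties path are assumptions, so adjust them to your environment:

  # resubmit the same workflow, but running as the yarn user
  sudo -u yarn oozie job -oozie http://<oozie-host>:11000/oozie -config /path/to/job.properties -run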

If it's an unsecured cluster, you may check the value of yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users (a quick way to inspect it is sketched after the list):

  • false will launch containers as the user who submitted the application
  • true will launch containers as the user specified in yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user
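
A quick way to check the current value on a NodeManager host, assuming the default HDP client configuration directory (adjust the path if your configuration lives elsewhere):

  # show the nonsecure-mode container-executor properties from the active YARN config
  grep -A 1 'nonsecure-mode' /etc/hadoop/conf/yarn-site.xml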