Created 10-23-2015 08:13 PM
I can't find any solution to this error. Sqoop in the shell script runs fine on the command line, but not when the script is launched from Oozie:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://servername:8020/user/username/.staging/job_1444331888071_2109/job.splitmetainfo
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1312)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1080)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1519)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://servername:8020/user/username/.staging/job_1444331888071_2109/job.splitmetainfo
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
    at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
    at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1563)
    ... 17 more
2015-10-23 15:45:55,263 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized
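For reference, the script is essentially a thin wrapper around a sqoop import. A minimal sketch of the kind of call it makes (the JDBC URL, credentials, table, and paths here are placeholders, not the real values):

    #!/bin/bash
    # Minimal sketch of the wrapper script run by the Oozie shell action.
    # The JDBC URL, credentials, table, and target dir below are placeholders.
    set -e
    sqoop import \
        --connect jdbc:mysql://dbhost:3306/dbname \
        --username dbuser \
        --password-file /user/username/.sqoop.pw \
        --table some_table \
        --target-dir /user/username/some_table \
        -m 1

Run directly from the command line, this succeeds; launched through the Oozie shell action, it dies with the stack trace above.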
Created 02-03-2016 04:08 PM
Closing this, as I was able to write a Sqoop action in shell.
Created 10-23-2015 11:33 PM
Have you tried passing the Sqoop command with its arguments in a sqoop action?
<action name="sqoopAction">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <command>SQOOP COMMAND AND ARGS</command>
        ...
    </sqoop>
    ...
</action>
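For example, the equivalent of this command line (hypothetical connection details):

    sqoop import --connect jdbc:mysql://dbhost:3306/dbname --table some_table --target-dir /user/username/some_table -m 1

would go into the command element without the leading "sqoop", i.e. starting at "import".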
Created 10-24-2015 01:00 AM
Sqoop is being called from a shell action; there's no choice in that.
Created 10-24-2015 01:09 AM
Can you add more details? What does your workflow XML look like? I am guessing you have already tried embedding the Sqoop command with its args in the exec tag.
Created 10-24-2015 01:12 AM
Please see this
Created 10-24-2015 01:20 AM
The requirement is to call the sqoop command from a shell action; there's looping going on, and it needs more flexibility than an Oozie sqoop action provides (a rough sketch of the loop is below). I am aware of all the possible ways of sqooping in Oozie. For some reason, calling sqoop within a shell action throws the error above, and my question is what the fix for that error is. Why does it complain about a missing job.splitmetainfo file under the staging directory?
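To give a flavor of it, the script reads a list of tables and sqoops each one in a loop; a rough sketch (tables.txt, the JDBC URL, credentials, and paths are placeholders):

    #!/bin/bash
    # Rough sketch of the looping wrapper script; all names below are placeholders.
    JDBC_URL="jdbc:mysql://dbhost:3306/dbname"
    while read -r table; do
        sqoop import \
            --connect "$JDBC_URL" \
            --username dbuser \
            --password-file /user/username/.sqoop.pw \
            --table "$table" \
            --target-dir "/user/username/raw/$table" \
            -m 1
    done < tables.txt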
Created 10-24-2015 01:24 AM
I wonder if the user calling the workflow exists on all the nodes and has its home directory and permissions in place.
Could you check?
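For example, on each node (username is a placeholder):

    id username                      # does the OS account exist on this node?
    hdfs dfs -ls -d /user/username   # does the HDFS home dir exist, and who owns it?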
Created 10-24-2015 01:23 AM
It's actually a similar error to this one: https://issues.apache.org/jira/browse/MAPREDUCE-3056. I wonder if it has reared its ugly head again in Hadoop 2.7.1.
Created 10-24-2015 01:42 AM
That's why I asked, "I wonder if the user calling the workflow exists on all the nodes and has its home directory and permissions in place.
Could you check?" 🙂
Created 10-24-2015 01:54 AM
Yes, I'll check again; we're echoing whoami and hostname in the script.
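i.e. at the top of the script:

    echo "running as $(whoami) on $(hostname)"   # confirms which user and node the shell action actually ran on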