org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster.

Expert Contributor

I am using an Ambari-managed installation and am trying to run an Oozie coordinator job that imports data into Hive using Sqoop.

I have the required services installed, up, and running on the server.

My workflow.xml looks like this:

<workflow-app name="once-a-day" xmlns="uri:oozie:workflow:0.1">
    <start to="sqoopAction"/>
    <action name="sqoopAction">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>import-all-tables --connect jdbc:mysql://HOST_NAME/erp --username hiveusername --password hivepassword --</command>
        </sqoop>
        <ok to="end"/>
        <error to="killJob"/>
    </action>
    <kill name="killJob">
        <message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
    </kill>
    <end name="end"/>
</workflow-app>
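
For reference, my job.properties looks roughly like this (the host names, ports, and application path here are placeholders, not my real values):

# job.properties (illustrative values -- substitute your own cluster's host names and ports)
nameNode=hdfs://namenode.example.com:8020
jobTracker=resourcemanager.example.com:8050
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/once-a-day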

When the job runs, I get the error below. How do I fix it? I have tried everything suggested on the Internet, but nothing has worked.

[0001059-160427195624911-oozie-oozi-W] ACTION[0001059-160427195624911-oozie-oozi-W@sqoopAction] Error starting action [sqoopAction]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:456)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:436)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1139)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1293)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:462)
at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:460)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:460)
at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1336)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1087)
... 8 more
1 ACCEPTED SOLUTION

Master Guru

You need to make sure that mapreduce.framework.name is set correctly (to yarn, I suppose) and that the mapred-site configuration files are in place. But first, please verify that your nameNode parameter is set correctly: HDFS is very strict about it and requires the hdfs:// prefix.

So use hdfs://namenode:8020 instead of namenode:8020.
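
As a rough sketch of what that means in practice (the values below are illustrative, not taken from the poster's cluster): mapreduce.framework.name normally lives in mapred-site.xml and should be yarn on an Ambari/YARN cluster, and the nameNode property in job.properties needs the scheme:

<!-- mapred-site.xml (usually managed by Ambari) -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

# job.properties -- note the hdfs:// prefix; host name and port are placeholders
nameNode=hdfs://namenode.example.com:8020
jobTracker=resourcemanager.example.com:8050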

