<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Oozie batch to import in Hive from Mysql fail in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39419#M38472</link>
    <description>&lt;P&gt;Hi tseader,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry I wasn't available!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an update: it works. The problem was the "Dynamic resource pool".&lt;/P&gt;&lt;P&gt;I created a resource pool for my username, and now the job starts and runs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This works differently from our Cloudera 4 cluster...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the job now runs, performs the Sqoop and Hive steps, and terminates successfully! Great news!&amp;nbsp;&lt;/P&gt;&lt;P&gt;But it is very slow for a small table import. I think something can be tuned in the dynamic resource pool or the YARN settings to use more resources, because during the job the CPU/memory usage of my 2 DataNodes was very low...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Maybe you can give me some information on how to calculate the maximum possible number of containers?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To answer your questions:&lt;/P&gt;&lt;P&gt;- Yes, Sqoop was working on its own.&lt;/P&gt;&lt;P&gt;- Yes, our analysts use &amp;lt;args&amp;gt; because in CDH4, &amp;lt;command&amp;gt; sometimes produced errors with special characters.&lt;/P&gt;&lt;P&gt;- Yes, sqoop/oozie/hive all work now. We will try Impala next.&lt;/P&gt;&lt;P&gt;- No, we didn't try to create the workflow from Hue. I will check with our developers about that.&lt;/P&gt;&lt;P&gt;- No, we didn't try with another database.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As you suspected, the problem didn't come from the workflow but from the configuration.&lt;/P&gt;&lt;P&gt;I'm new to Cloudera/Hadoop, so I'm learning, and discovering the configuration over time!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now I have to find the best configuration to make better use of our DataNodes...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks again tseader!&lt;/P&gt;</description>
    <pubDate>Wed, 06 Apr 2016 22:29:57 GMT</pubDate>
    <dc:creator>fmorcamp</dc:creator>
    <dc:date>2016-04-06T22:29:57Z</dc:date>
    <item>
      <title>Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39144#M38466</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm coming to you with an error from one of our jobs that is not very explicit...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I found another topic about the same error, but it doesn't seem to have the same origin.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To reproduce the problem, I start this Oozie job from my VM (we have a standalone lab on a remote Cloudera 5.5.2 server).&lt;/P&gt;&lt;P&gt;The command to start the job:&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;oozie job -oozie http://host.domain.com:11000/oozie -config config-default.xml -run&lt;/PRE&gt;&lt;P&gt;The content of the config-default.xml file:&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;job_tracker&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;host.domain.com:8032&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;job_xml&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;/path/to/file/hive-site.xml&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;name_node&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;hdfs://host.domain.com:8020&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;oozie.libpath&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;${name_node}/user/oozie/share/lib/lib_20160216173849&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
 	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;oozie.use.system.libpath&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;oozie.wf.application.path&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;${name_node}/path/to/file/simple-etl-wf.xml&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;db_user&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;user&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;db_pass&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;password&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;target_dir&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;/path/to/destination&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;hive_db_schema&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;default&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
	&amp;lt;property&amp;gt;&amp;lt;name&amp;gt;table_suffix&amp;lt;/name&amp;gt;&amp;lt;value&amp;gt;specific_suffix&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/configuration&amp;gt;&lt;/PRE&gt;&lt;P&gt;I tried setting "job_tracker" with http://, but we get the same error.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The content of the simple-etl-wf.xml file:&lt;/P&gt;&lt;PRE&gt;&amp;lt;workflow-app xmlns="uri:oozie:workflow:0.5" name="simple-etl-wf"&amp;gt;
    &amp;lt;global&amp;gt;
	&amp;lt;job-tracker&amp;gt;${name_node}&amp;lt;/job-tracker&amp;gt;
	&amp;lt;name-node&amp;gt;${job_tracker}&amp;lt;/name-node&amp;gt;
	&amp;lt;job-xml&amp;gt;${job_xml}&amp;lt;/job-xml&amp;gt;
    &amp;lt;/global&amp;gt;


    &amp;lt;start to="extract"/&amp;gt;

    &amp;lt;fork name="extract"&amp;gt;
	&amp;lt;path start="table" /&amp;gt;
    &amp;lt;/fork&amp;gt;

    &amp;lt;action name="table"&amp;gt;
        &amp;lt;sqoop xmlns="uri:oozie:sqoop-action:0.4"&amp;gt;
          &amp;lt;arg&amp;gt;import&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--connect&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;jdbc:mysql://db.domain.com/database&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;username&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;${db_user}&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;password&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;${db_pass}&amp;lt;/arg&amp;gt;
 	  &amp;lt;arg&amp;gt;--table&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;table&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--target-dir&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;${target_dir}/table&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--split-by&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;column&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--hive-import&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--hive-overwrite&amp;lt;/arg&amp;gt;
	  &amp;lt;arg&amp;gt;--hive-table&amp;lt;/arg&amp;gt;
 	  &amp;lt;arg&amp;gt;${hive_db_schema}.table_${table_suffix}&amp;lt;/arg&amp;gt;
        &amp;lt;/sqoop&amp;gt;
        &amp;lt;ok to="join"/&amp;gt;
        &amp;lt;error to="fail"/&amp;gt;
    &amp;lt;/action&amp;gt;
    
    &amp;lt;join name="join" to="transform" /&amp;gt;

    &amp;lt;action name="transform"&amp;gt;
        &amp;lt;hive xmlns="uri:oozie:hive-action:0.4"&amp;gt;
           &amp;lt;script&amp;gt;script.hql&amp;lt;/script&amp;gt;
 	   &amp;lt;param&amp;gt;hive_db_schema=${hive_db_schema}&amp;lt;/param&amp;gt;
	   &amp;lt;param&amp;gt;table_suffix=${table_suffix}&amp;lt;/param&amp;gt;
        &amp;lt;/hive&amp;gt;
        &amp;lt;ok to="end"/&amp;gt;
        &amp;lt;error to="fail"/&amp;gt;
    &amp;lt;/action&amp;gt;
    
    &amp;lt;kill name="fail"&amp;gt;
        &amp;lt;message&amp;gt;Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]&amp;lt;/message&amp;gt;
    &amp;lt;/kill&amp;gt;
    &amp;lt;end name="end"/&amp;gt;
&amp;lt;/workflow-app&amp;gt;&lt;/PRE&gt;&lt;P&gt;The job starts, but it blocks at about 20%, and we get this error:&lt;/P&gt;&lt;PRE&gt;JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.&lt;/PRE&gt;&lt;PRE&gt;2016-03-29 15:45:17,149 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[host.domain.com] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000004-160325161246127-oozie-oozi-W] ACTION[0000004-160325161246127-oozie-oozi-W@session] Error starting action [session]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
	at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:454)
	at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:434)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1032)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1203)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
	at org.apache.oozie.command.XCommand.call(XCommand.java:286)
	at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
	at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
	at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
	at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
	at org.apache.hadoop.mapreduce.Cluster.&amp;lt;init&amp;gt;(Cluster.java:82)
	at org.apache.hadoop.mapreduce.Cluster.&amp;lt;init&amp;gt;(Cluster.java:75)
	at org.apache.hadoop.mapred.JobClient.init(JobClient.java:472)
	at org.apache.hadoop.mapred.JobClient.&amp;lt;init&amp;gt;(JobClient.java:450)
	at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:436)
	at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:434)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:434)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1246)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:980)
	... 10 more&lt;/PRE&gt;&lt;P&gt;Yet the job_tracker and name_node have the correct URL and path. The mysql-connector-java.jar is present in the sharelib folder.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I put Oozie in debug mode, but got no further information about it.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The "mapreduce.framework.name" property is set to "yarn" in each XML configuration file of the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do you have any idea about this error?&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:11:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39144#M38466</guid>
      <dc:creator>fmorcamp</dc:creator>
      <dc:date>2022-09-16T10:11:28Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39230#M38467</link>
      <description>&lt;P&gt;Just to check, shouldn't the username and password have double-hyphens in the Sqoop args, or does it not matter?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Just want to eliminate any confounding variables &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 31 Mar 2016 22:20:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39230#M38467</guid>
      <dc:creator>tseader</dc:creator>
      <dc:date>2016-03-31T22:20:04Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39232#M38468</link>
      <description>&lt;P&gt;Hi tseader! Thanks for your help!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Good eyes! Yes, I think that could be an error and stop the process. I modified these 2 settings, but the problem is still present.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The failure seems to happen earlier, though.&lt;/P&gt;&lt;P&gt;It seems to read this file, but it never starts the MySQL connection process.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have something else in the Oozie logs.&lt;/P&gt;&lt;P&gt;When I launch the command from my VM, the workflow appears in Hue.&lt;/P&gt;&lt;P&gt;But the log starts with these 2 lines:&lt;/P&gt;&lt;PRE&gt;2016-03-31 19:04:18,709 WARN org.apache.oozie.util.ParameterVerifier: SERVER[hostname] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] The application does not define formal parameters in its XML definition
2016-03-31 19:04:18,744 WARN org.apache.oozie.service.LiteWorkflowAppService: SERVER[hostname] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] libpath [hdfs://hostname.domain.com:8020/path/to/oozie/lib] does not exist&lt;/PRE&gt;&lt;P&gt;Yet that is not the libpath that I set in my job file...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The complete log, from when the job starts to the end:&lt;/P&gt;&lt;PRE&gt;2016-03-31 19:04:18,709 WARN org.apache.oozie.util.ParameterVerifier: SERVER[hostname] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] The application does not define formal parameters in its XML definition
2016-03-31 19:04:18,744 WARN org.apache.oozie.service.LiteWorkflowAppService: SERVER[hostname] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] libpath [hdfs://hostname.domain.com:8020/path/to/oozie/lib] does not exist
2016-03-31 19:04:18,805 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@:start:] Start action [0000001-160331185825562-oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2016-03-31 19:04:18,809 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@:start:] [***0000001-160331185825562-oozie-oozi-W@:start:***]Action status=DONE
2016-03-31 19:04:18,809 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@:start:] [***0000001-160331185825562-oozie-oozi-W@:start:***]Action updated in DB!
2016-03-31 19:04:18,898 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@extract] Start action [0000001-160331185825562-oozie-oozi-W@extract] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2016-03-31 19:04:18,907 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@extract] [***0000001-160331185825562-oozie-oozi-W@extract***]Action 
2016-03-31 19:04:18,907 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@extract] [***0000001-160331185825562-oozie-oozi-W@extract***]Action updated in DB!
2016-03-31 19:04:19,077 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@session] Start action [0000001-160331185825562-oozie-oozi-W@session] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2016-03-31 19:04:22,804 WARN org.apache.hadoop.security.UserGroupInformation: SERVER[hostname] PriviledgedActionException as:username (auth:PROXY) via oozie (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: httpstatus=DONE
2016-03-31 19:04:22,805 WARN org.apache.hadoop.security.UserGroupInformation: SERVER[hostname] PriviledgedActionException as:username (auth:PROXY) via oozie (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
2016-03-31 19:04:22,805 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[hostname] USER[username] GROUP[-] TOKEN[] APP[simple-etl-wf] JOB[0000001-160331185825562-oozie-oozi-W] ACTION[0000001-160331185825562-oozie-oozi-W@session] Error starting action [session]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
        at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:454)
        at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:434)
        at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1032)
        at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1203)
        at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
        at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
        at org.apache.oozie.command.XCommand.call(XCommand.java:286)
        at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
        at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
        at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
        at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
        at org.apache.hadoop.mapreduce.Cluster.&amp;lt;init&amp;gt;(Cluster.java:82)
        at org.apache.hadoop.mapreduce.Cluster.&amp;lt;init&amp;gt;(Cluster.java:75)
        at org.apache.hadoop.mapred.JobClient.init(JobClient.java:472)
        at org.apache.hadoop.mapred.JobClient.&amp;lt;init&amp;gt;(JobClient.java:450)
        at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:436)
        at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:434)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:434)
        at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1246)
        at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:980)&lt;/PRE&gt;&lt;P&gt;Maybe it can give you an idea!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I checked the hostnames in the configuration files, but they seem to be OK.&amp;nbsp;&lt;/P&gt;&lt;P&gt;This error message is not very clear...&lt;/P&gt;</description>
      <pubDate>Thu, 31 Mar 2016 23:25:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39232#M38468</guid>
      <dc:creator>fmorcamp</dc:creator>
      <dc:date>2016-03-31T23:25:27Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39233#M38469</link>
      <description>&lt;P&gt;The config-default.xml has "host.domain.com" because you wanted to generalize it, right? &amp;nbsp;I'm assuming you've tried localhost with the proper port in your job_tracker and name_node values?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Apr 2016 00:03:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39233#M38469</guid>
      <dc:creator>tseader</dc:creator>
      <dc:date>2016-04-01T00:03:06Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39279#M38470</link>
      <description>&lt;P&gt;Yes, we always use the real FQDN to start the job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And we found the mistake: the "job_tracker" value was in the "name-node" element and vice-versa...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So we swapped them back, and the process now starts, but it doesn't complete.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We see the workflow in "running" status, with an "oozie:launcher" job in the running state. It creates an "oozie:action" task, but that task stays in the "accepted" status, and I can't find out why.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried some YARN memory configuration settings, with no success.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the ResourceManager, I can find this log for the job:&lt;/P&gt;&lt;PRE&gt;&amp;gt;&amp;gt;&amp;gt; Invoking Sqoop command line now &amp;gt;&amp;gt;&amp;gt;

4624 [uber-SubtaskRunner] WARN  org.apache.sqoop.tool.SqoopTool  - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
4654 [uber-SubtaskRunner] INFO  org.apache.sqoop.Sqoop  - Running Sqoop version: 1.4.6-cdh5.5.2
4671 [uber-SubtaskRunner] WARN  org.apache.sqoop.tool.BaseSqoopTool  - Setting your password on the command-line is insecure. Consider using -P instead.
4672 [uber-SubtaskRunner] INFO  org.apache.sqoop.tool.BaseSqoopTool  - Using Hive-specific delimiters for output. You can override
4672 [uber-SubtaskRunner] INFO  org.apache.sqoop.tool.BaseSqoopTool  - delimiters with --fields-terminated-by, etc.
4690 [uber-SubtaskRunner] WARN  org.apache.sqoop.ConnFactory  - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
4816 [uber-SubtaskRunner] INFO  org.apache.sqoop.manager.MySQLManager  - Preparing to use a MySQL streaming resultset.
4820 [uber-SubtaskRunner] INFO  org.apache.sqoop.tool.CodeGenTool  - Beginning code generation
5360 [uber-SubtaskRunner] INFO  org.apache.sqoop.manager.SqlManager  - Executing SQL statement: SELECT t.* FROM `table` AS t LIMIT 1
5521 [uber-SubtaskRunner] INFO  org.apache.sqoop.manager.SqlManager  - Executing SQL statement: SELECT t.* FROM `table` AS t LIMIT 1
5616 [uber-SubtaskRunner] INFO  org.apache.sqoop.orm.CompilationManager  - HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
7274 [uber-SubtaskRunner] INFO  org.apache.sqoop.orm.CompilationManager  - Writing jar file: /tmp/sqoop-yarn/compile/f695dd68db2ed1ecf703a5405d308df5/table.jar
7282 [uber-SubtaskRunner] WARN  org.apache.sqoop.manager.MySQLManager  - It looks like you are importing from mysql.
7282 [uber-SubtaskRunner] WARN  org.apache.sqoop.manager.MySQLManager  - This transfer can be faster! Use the --direct
7282 [uber-SubtaskRunner] WARN  org.apache.sqoop.manager.MySQLManager  - option to exercise a MySQL-specific fast path.
7282 [uber-SubtaskRunner] INFO  org.apache.sqoop.manager.MySQLManager  - Setting zero DATETIME behavior to convertToNull (mysql)
7284 [uber-SubtaskRunner] INFO  org.apache.sqoop.mapreduce.ImportJobBase  - Beginning import of game_session
7398 [uber-SubtaskRunner] WARN  org.apache.sqoop.mapreduce.JobBase  - SQOOP_HOME is unset. May not be able to find all job dependencies.
8187 [uber-SubtaskRunner] INFO  org.apache.sqoop.mapreduce.db.DBInputFormat  - Using read commited transaction isolation
8211 [uber-SubtaskRunner] INFO  org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - BoundingValsQuery: SELECT MIN(`session_id`), MAX(`session_id`) FROM `table`
8237 [uber-SubtaskRunner] INFO  org.apache.sqoop.mapreduce.db.IntegerSplitter  - Split size: 9811415567004; Num splits: 4 from: 14556292800030657 to: 14595538462298675
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat&lt;/PRE&gt;&lt;P&gt;The status is looping on "Heart beat" logs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I don't know if it comes from the memory configuration or from something else...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do you have any idea about that?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Apr 2016 23:47:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39279#M38470</guid>
      <dc:creator>fmorcamp</dc:creator>
      <dc:date>2016-04-01T23:47:48Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39302#M38471</link>
      <description>&lt;P&gt;It's not clear to me what is going on.&amp;nbsp; What I recommend, if no one else has a solution, is to simplify the scenario and start eliminating variables.&amp;nbsp; Some additional questions that may help:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;I'm assuming this Sqoop import works outside of Oozie?&lt;/LI&gt;&lt;LI&gt;Just curious, why do you have the Sqoop command in &amp;lt;args&amp;gt; and not &amp;lt;command&amp;gt;?&amp;nbsp; I don't see a free-form query, so &amp;lt;args&amp;gt; isn't needed. (just curious) &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/LI&gt;&lt;LI&gt;Do other action types work fine?&lt;/LI&gt;&lt;LI&gt;Have you tried using Hue to create a workflow with the Sqoop action to see if that works?&amp;nbsp;Hue will generate the workflow and job.properties for you, which might give you something to compare against in order to narrow down where the problem lies.&lt;/LI&gt;&lt;LI&gt;Have you tried connecting to a different database to see if that has problems as well?&lt;/LI&gt;&lt;LI&gt;(thinking in text here)&amp;nbsp; Because you can reproduce the problem in two environments, it's more than likely a problem with the workflow itself rather than with the infrastructure management, unless both environments are mirrors of each other in Cloudera Manager or something.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 03 Apr 2016 23:53:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39302#M38471</guid>
      <dc:creator>tseader</dc:creator>
      <dc:date>2016-04-03T23:53:42Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie batch to import in Hive from Mysql fail</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39419#M38472</link>
      <description>&lt;P&gt;Hi tseader,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry I wasn't available!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an update: it works. The problem was the "Dynamic resource pool".&lt;/P&gt;&lt;P&gt;I created a resource pool for my username, and now the job starts and runs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This works differently from our Cloudera 4 cluster...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the job now runs, performs the Sqoop and Hive steps, and terminates successfully! Great news!&amp;nbsp;&lt;/P&gt;&lt;P&gt;But it is very slow for a small table import. I think something can be tuned in the dynamic resource pool or the YARN settings to use more resources, because during the job the CPU/memory usage of my 2 DataNodes was very low...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Maybe you can give me some information on how to calculate the maximum possible number of containers?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To answer your questions:&lt;/P&gt;&lt;P&gt;- Yes, Sqoop was working on its own.&lt;/P&gt;&lt;P&gt;- Yes, our analysts use &amp;lt;args&amp;gt; because in CDH4, &amp;lt;command&amp;gt; sometimes produced errors with special characters.&lt;/P&gt;&lt;P&gt;- Yes, sqoop/oozie/hive all work now. We will try Impala next.&lt;/P&gt;&lt;P&gt;- No, we didn't try to create the workflow from Hue. I will check with our developers about that.&lt;/P&gt;&lt;P&gt;- No, we didn't try with another database.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As you suspected, the problem didn't come from the workflow but from the configuration.&lt;/P&gt;&lt;P&gt;I'm new to Cloudera/Hadoop, so I'm learning, and discovering the configuration over time!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now I have to find the best configuration to make better use of our DataNodes...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks again tseader!&lt;/P&gt;</description>
      <pubDate>Wed, 06 Apr 2016 22:29:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Oozie-batch-to-import-in-Hive-from-Mysql-fail/m-p/39419#M38472</guid>
      <dc:creator>fmorcamp</dc:creator>
      <dc:date>2016-04-06T22:29:57Z</dc:date>
    </item>
  </channel>
</rss>

