Member since: 01-27-2016
Posts: 14
Kudos Received: 10
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 848 | 07-06-2017 10:02 AM
 | 2738 | 01-17-2017 05:34 AM
 | 799 | 08-30-2016 11:48 AM
07-21-2017
02:13 PM
1 Kudo
@Ivan Majnaric I think you can manage and submit Spark2 applications through Oozie. Here are the steps I would suggest: Create a new directory (spark2) in the Oozie sharelib and copy the Spark2 jars into it. Update the Oozie sharelib using the Oozie admin command. Then, in your workflow's job.properties file, point to the new spark2 library with "oozie.action.sharelib.for.spark=spark2", because you will be specifying the Spark2 application under the spark XML element in workflow.xml. You may also need to remove some jars from the "oozie" directory of the sharelib because of jar version conflicts. Hope this helps and gets you going. 🙂
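A rough command sketch of those steps; the HDFS paths, the sharelib timestamp directory and the Oozie host/port are assumptions for illustration, so adjust them to your cluster:
hdfs dfs -mkdir /user/oozie/share/lib/lib_<timestamp>/spark2
hdfs dfs -put /usr/hdp/current/spark2-client/jars/* /user/oozie/share/lib/lib_<timestamp>/spark2/
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
# and in job.properties:
oozie.action.sharelib.for.spark=spark2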
07-06-2017
10:02 AM
Oozie executes the shell action script from a launcher job. The launcher can be allocated to any node in the Hadoop cluster, so the "no such file or directory" error means that the launcher job's node is different from node2, where you have stored the file. For the "permission denied" error, please check that proper write permission is set on the file, as the user modifying it from the Oozie shell action can be different. In general it is not advisable to modify a local file from an Oozie action, because the action can be launched on any node in the cluster. Once you move the file to HDFS, use the proper HDFS commands. Note that you cannot modify a file in place in HDFS; you need to get the file and, after modification, put (store) it again.
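A rough sketch of that get/modify/put sequence inside the shell action script; the HDFS path and the edit command are only placeholders:
hdfs dfs -get /data/myfile.txt myfile.txt        # copy the HDFS file to the launcher node
sed -i 's/old/new/' myfile.txt                   # modify the local copy
hdfs dfs -put -f myfile.txt /data/myfile.txt     # overwrite the file back in HDFS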
06-07-2017
08:42 AM
HDP uses RM port 8050; why is it 8032 in your screenshots? Can you also provide the error message you are seeing?
06-07-2017
08:37 AM
Why is the HDFS port missing in the error? Or is it HDFS HA with a nameservice?
06-07-2017
08:24 AM
Restart the services in this order: YARN, then Oozie. You can also check this: https://community.hortonworks.com/questions/52210/hdp-24-oozie-work-flow-ja009-cannot-initialize-clu.html
01-17-2017
05:58 AM
4 Kudos
When using the Oozie proxy job submission API to submit Oozie Hive, Sqoop and Pig actions, any configuration passed to the action must be in the format below.
For Hive action:
oozie.hive.options.size : The number of options you'll be passing to Hive.
oozie.hive.options.n : An argument to pass to Hive; 'n' is an integer starting with 0 to indicate the option number.
<property>
  <name>oozie.hive.options.1</name>
  <value>-Doozie.launcher.mapreduce.job.queuename=hive</value>
</property>
<property>
  <name>oozie.hive.options.0</name>
  <value>-Dmapreduce.job.queuename=hive</value>
</property>
<property>
  <name>oozie.hive.options.size</name>
  <value>2</value>
</property>
For Pig action:
oozie.pig.options.size : The number of options you'll be passing to Pig.
oozie.pig.options.n : An argument to pass to Pig; 'n' is an integer starting with 0 to indicate the option number.
<property>
  <name>oozie.pig.options.1</name>
  <value>-Doozie.launcher.mapreduce.job.queuename=pig</value>
</property>
<property>
  <name>oozie.pig.options.0</name>
  <value>-Dmapreduce.job.queuename=pig</value>
</property>
<property>
  <name>oozie.pig.options.size</name>
  <value>2</value>
</property>
For Sqoop action:
oozie.sqoop.options.size : The number of options you'll be passing to the Sqoop Hadoop job.
oozie.sqoop.options.n : An argument to pass to the Sqoop Hadoop job configuration; 'n' is an integer starting with zero (0) to indicate the option number.
<property>
  <name>oozie.sqoop.options.1</name>
  <value>-Doozie.launcher.mapreduce.job.queuename=sqoop</value>
</property>
<property>
  <name>oozie.sqoop.options.0</name>
  <value>-Dmapreduce.job.queuename=sqoop</value>
</property>
<property>
  <name>oozie.sqoop.options.size</name>
  <value>2</value>
</property>
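These properties go into the XML configuration document that is POSTed to the Oozie jobs endpoint. A rough sketch with curl, assuming the Oozie host/port and the configuration file name; the configuration file would also carry the job-specific properties (for example the Sqoop command), and jobtype can be hive, pig or sqoop as appropriate:
curl -X POST -H "Content-Type: application/xml" -d @sqoop-config.xml "http://<oozie-host>:11000/oozie/v1/jobs?jobtype=sqoop"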
01-17-2017
05:34 AM
4 Kudos
@fogartyamanda The configuration properties can be passed to the Sqoop job in the following way:
<property>
<name>oozie.sqoop.options.1</name>
<value>-Doozie.launcher.mapreduce.job.queuename=sqoop</value>
</property>
<property>
<name>oozie.sqoop.options.0</name>
<value>-Dmapreduce.job.queuename=sqoop</value>
</property>
<property>
<name>oozie.sqoop.options.size</name>
<value>2</value>
</property>
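If, instead of the proxy submission API, you are defining the Sqoop action in a regular workflow.xml, a minimal sketch of the equivalent (the command and placeholders are only examples) is to set the queue inside the action's configuration element:
<sqoop xmlns="uri:oozie:sqoop-action:0.4">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>mapreduce.job.queuename</name>
            <value>sqoop</value>
        </property>
    </configuration>
    <command>import --connect ${jdbcUrl} --table ${table} --target-dir ${targetDir}</command>
</sqoop>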
Thanks.
10-03-2016
01:11 PM
Can you set this property in the configuration tag of the Hive action and try it out? That should work.
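A minimal sketch of where that goes in workflow.xml; the property shown is only a placeholder for whichever one you need to set:
<hive xmlns="uri:oozie:hive-action:0.5">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>hive.exec.dynamic.partition.mode</name> <!-- placeholder property -->
            <value>nonstrict</value>
        </property>
    </configuration>
    <script>myscript.q</script>
</hive>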
09-22-2016
03:30 PM
What is the namenode and jobtracker configuration you are using? Please provide that.
09-22-2016
02:36 PM
You can ignore this error; it will not have any functional impact on the application. Currently Oozie ignores the 'wf_id' column while comparing the index columns. Further, if you want the extra (unused) tables, columns and indexes to be flagged as incorrect as well, set the following in oozie-site.xml: oozie.service.SchemaCheckerService.ignore.extras=false
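In oozie-site.xml that would look like this (a minimal sketch):
<property>
    <name>oozie.service.SchemaCheckerService.ignore.extras</name>
    <value>false</value>
</property>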
09-21-2016
10:14 AM
You need to check the launcher job logs for more information. Please check them and share what you find there.
09-14-2016
08:00 AM
2015-08-03 06:43:44,209 INFO CoordActionNotificationXCommand:543 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C@1] STARTED CoordinatorNotification actionId=0000000-150803063131195-oozie-oozi-C@1 : WAITING
The first action from the coordinator is in the WAITING state. Are you sure it executed for the first coordinator action and did not work? You need to provide more logs related to the issue. Also, E1005 should come with a more detailed message than what you provided in the post.
09-13-2016
06:59 PM
You are executing an Oozie admin command, so you need to be using an Oozie admin user. Usually the 'oozie' user is configured as the admin user in Oozie; you can take a look at the 'adminusers.txt' file under the Oozie configuration. Once you switch to an Oozie admin user ('sudo su - oozie'), you should be able to update the Oozie sharelib. The other error you mentioned, 'java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SqoopMain not found', suggests you did not specify 'oozie.use.system.libpath=true' in your job.properties file. This should resolve both of your issues. Note: if the 'oozie.use.system.libpath=true' property was missing, just add it and submit/run the workflow; you do not need to update the Oozie sharelib.
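A quick sketch of the two fixes; the Oozie host/port is an assumption:
sudo su - oozie
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
# and in job.properties:
oozie.use.system.libpath=true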
08-30-2016
11:48 AM
1 Kudo
Setting this property does not depend on the type of action. If you do not specify it, the default value of the property is used, and the default value for 'mapreduce.job.queuename' is 'default'. https://hadoop.apache.org/docs/r2.7.1/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
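For example, to direct the action's MapReduce job to a non-default queue you would set the property explicitly in the action's configuration (the queue name here is just an example):
<property>
    <name>mapreduce.job.queuename</name>
    <value>myqueue</value>
</property>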