Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
Views | Posted
---|---
10990 | 03-08-2019 06:33 PM
4765 | 02-15-2019 08:47 PM
4080 | 09-26-2018 06:02 PM
10397 | 09-07-2018 10:33 PM
5479 | 04-25-2018 01:55 AM
04-25-2018 01:24 AM
@Jeff Rosenberg
You will have to pass -x tez as an argument in your workflow.xml to run the Pig job via Tez. In your workflow.xml:
<script>yourscript.pig</script>
<argument>-x</argument>
<argument>tez</argument>
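For context, here is a minimal sketch of how those elements sit inside a complete Pig action (the action name and the ok/error transitions are placeholders, not from this thread):
<!-- Sketch only: a complete Pig action with the Tez arguments in place;
     action name and transitions are placeholders -->
<action name="pig-node">
    <pig>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <script>yourscript.pig</script>
        <argument>-x</argument>
        <argument>tez</argument>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
</action>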
04-19-2018 09:40 PM
Thanks @Chad Woodhead - Updated! 🙂
04-16-2018 05:33 PM
2 Kudos
@Pandu123
Please change jobTracker=hdn04.abc.com:8021 to jobTracker=hdn04.abc.com:8032 (8032 is the YARN ResourceManager port; 8021 was the old MR1 JobTracker port) and try again.
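For reference, a sketch of the relevant job.properties lines; only the jobTracker host comes from this thread, the nameNode entry and its port are assumptions:
# job.properties sketch; nameNode value and port are assumed, adjust to your cluster
nameNode=hdfs://hdn04.abc.com:8020
# 8032 is the default YARN ResourceManager IPC port
jobTracker=hdn04.abc.com:8032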
04-09-2018 06:10 PM
1 Kudo
@Harendra Sinha - No, the Oozie Web UI is read-only. You can have a look at Workflow Manager in Ambari, which has great features to design/run/re-run Oozie workflows.
12-29-2017 02:35 AM
Tip! 🙂 To avoid the error org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase-secure/blah blah, please make sure to add the line below in hbase-indexer-env.sh:
HBASE_INDEXER_OPTS="$HBASE_INDEXER_OPTS -Djava.security.auth.login.config=<path-of-indexer-jaas-file>"
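For reference, a minimal sketch of what such an indexer JAAS file usually contains; the keytab path and principal are placeholders you must adapt to your cluster:
// Sketch of the indexer JAAS file; keytab path and principal are placeholders
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/hbase-indexer.keytab"
  principal="hbase-indexer/host1.example.com@EXAMPLE.COM";
};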
11-09-2017 06:46 AM
Changing the REALM to UPPERCASE in Ambari helps; there is no need to change it in AD (this worked for me on Windows Server 2012 R2).
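For illustration, a sketch of how the uppercase realm would look in the Ambari-managed krb5.conf; EXAMPLE.COM and the AD host are placeholders, not from this post:
# krb5.conf fragment (sketch); realm and KDC host are placeholders
[libdefaults]
  default_realm = EXAMPLE.COM
[realms]
  EXAMPLE.COM = {
    kdc = ad.example.com
    admin_server = ad.example.com
  }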
10-31-2017 09:28 PM
@amarnath reddy pappu - I believe we will have to regenerate the war file in secure mode and restart the Oozie service again:
su -l oozie -c "/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war -secure"
10-27-2017 10:27 PM
Thank you so much @Mridul M 🙂
10-27-2017 06:23 PM
@Mridul M Can you please update point number 2 in the Ambari-managed section?
Current point - Add DataNucleus jars to the Spark Thrift Server classpath. Navigate to the "Advanced spark-hive-site-override" section and add:
Modification - Add DataNucleus jars to the Spark Thrift Server classpath. Navigate to the "Custom spark-thrift-sparkconf" section and add:
Thanks, Kuldeep
10-16-2017 10:02 PM
1 Kudo
Please follow the steps below to run a SparkR script via Oozie.

1. Install R packages on all the NodeManager nodes:
yum -y install R R-devel libcurl-devel openssl-devel

2. Keep your R script ready. Here is a sample script:
# Minimal SparkR example: build a local data.frame, convert it to a
# Spark DataFrame, print its schema, then stop the SparkR context
library(SparkR)
sc <- sparkR.init(appName="SparkR-sample")
sqlContext <- sparkRSQL.init(sc)
localDF <- data.frame(name=c("ABC", "blah", "blah"), age=c(39, 32, 81))
df <- createDataFrame(sqlContext, localDF)
printSchema(df)
sparkR.stop()

3. Create workflow.xml. Here is a working example:
<workflow-app xmlns='uri:oozie:workflow:0.5' name='SparkFileCopy'>
<global>
<configuration>
<property>
<name>oozie.launcher.yarn.app.mapreduce.am.env</name>
<value>SPARK_HOME=/usr/hdp/2.5.3.0-37/spark</value>
</property>
<property>
<name>oozie.launcher.mapred.child.env</name>
<value>SPARK_HOME=/usr/hdp/2.5.3.0-37/spark</value>
</property>
</configuration>
</global>
<start to='spark-node' />
<action name='spark-node'>
<spark xmlns="uri:oozie:spark-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/spark"/>
</prepare>
<master>${master}</master>
<name>SparkR</name>
<jar>${nameNode}/user/${wf:user()}/spark.R</jar>
<spark-opts>--driver-memory 512m --conf spark.driver.extraJavaOptions=-Dhdp.version=2.5.3.0</spark-opts>
</spark>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Workflow failed, error
message[${wf:errorMessage(wf:lastErrorNode())}]
</message>
</kill>
<end name='end' />
</workflow-app>

4. Make sure that you don't have sparkr.zip in the workflow/lib directory, in the Oozie sharelib, or in a <file> tag in the workflow, or else it will cause conflicts.

Upload the workflow to HDFS and run it. It should work. This has been successfully tested on HDP-2.5.X and HDP-2.6.X.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!

Reference - https://developer.ibm.com/hadoop/2017/06/30/scheduling-spark-job-written-pyspark-sparkr-yarn-oozie
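To make the example easier to reproduce, here is a sketch of a matching job.properties and the submit command; the host names, ports, and application path are illustrative assumptions, not values from the post:
# job.properties sketch for the workflow above; hosts, ports, and paths are placeholders
nameNode=hdfs://namenode.example.com:8020
jobTracker=resourcemanager.example.com:8032
master=yarn-cluster
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/apps/sparkr

Submit it with the Oozie CLI:
oozie job -oozie http://oozie-host.example.com:11000/oozie -config job.properties -run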