<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Oozie workflow, Spark action (using simple Dataframe): &amp;quot;Table not found&amp;quot; error in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/41357#M28321</link>
    <description>&lt;P&gt;Congratulations on solving your issue and thank you for such a detailed description of the solution.&lt;/P&gt;</description>
    <pubDate>Thu, 26 May 2016 11:53:05 GMT</pubDate>
    <dc:creator>cjervis</dc:creator>
    <dc:date>2016-05-26T11:53:05Z</dc:date>
    <item>
      <title>Oozie workflow, Spark action (using simple Dataframe): "Table not found" error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/40834#M28318</link>
      <description>&lt;P&gt;Hi all, my CDH test rig is as follows:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;CDH 5.5.1&lt;/P&gt;&lt;P&gt;Spark 1.5.0&lt;/P&gt;&lt;P&gt;Oozie 4.1.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have successfully created a simple Oozie Workflow that spawns a Spark Action using the HUE interface. My intention is to use YARN in cluster mode to run the Workflow/Action.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;It's a Python script&lt;/STRONG&gt;, which is as follows (just a test):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql import HiveContext
from pyspark.sql.functions import *

sconf = SparkConf().setAppName("MySpark").set("spark.driver.memory", "1g").setMaster("yarn-cluster")
sc = SparkContext(conf=sconf)

### (1) ALTERNATIVELY USE ONE OF THE FOLLOWING CONTEXT DEFINITIONS:
sqlCtx = SQLContext(sc)
#sqlCtx = HiveContext(sc)

### (2) IF HIVECONTEXT, EVENTUALLY SET THE DATABASE IN USE (SHOULDN'T BE NECESSARY):
#sqlCtx .sql("use default")

### (3) CREATE MAIN DATAFRAME. TRY THE SYNTAXES ALTERNATIVELY, COMBINE WITH DIFFERENT (1):
#cronologico_DF = sqlCtx.table("sales_fact")
cronologico_DF = sqlCtx.sql("select * from sales_fact")

### (4) ANOTHER DATAFRAME
extraction_cronologico_DF = cronologico_DF.select("PRODUCT_KEY")

### (5) USELESS PRINT STATEMENT:
print 'a'&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I run the Workflow, a MapReduce Job is started. Shortly after, a Spark Job&amp;nbsp;is spawned (I can see that from the Job Browser).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The Spark Job fails with the following error (excerpt from the Log File of the Spark Action):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;py4j.protocol.Py4JJavaError: An error occurred while calling o51.sql.
: java.lang.RuntimeException: Table Not Found: sales_fact&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is my "workflow.xml":&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&amp;lt;workflow-app name="Churn_2015" xmlns="uri:oozie:workflow:0.5"&amp;gt;
  &amp;lt;global&amp;gt;
      &amp;lt;job-xml&amp;gt;hdfs:///user/hue/oozie/workspaces/hue-oozie-1460736691.98/hive-site.xml&amp;lt;/job-xml&amp;gt;
            &amp;lt;configuration&amp;gt;
                &amp;lt;property&amp;gt;
                    &amp;lt;name&amp;gt;oozie.launcher.yarn.app.mapreduce.am.env&amp;lt;/name&amp;gt;
                    &amp;lt;value&amp;gt;SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark&amp;lt;/value&amp;gt;
                &amp;lt;/property&amp;gt;
            &amp;lt;/configuration&amp;gt;
  &amp;lt;/global&amp;gt;
    &amp;lt;start to="spark-3ca0"/&amp;gt;
    &amp;lt;kill name="Kill"&amp;gt;
        &amp;lt;message&amp;gt;Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]&amp;lt;/message&amp;gt;
    &amp;lt;/kill&amp;gt;
    &amp;lt;action name="spark-3ca0"&amp;gt;
        &amp;lt;spark xmlns="uri:oozie:spark-action:0.1"&amp;gt;
            &amp;lt;job-tracker&amp;gt;${jobTracker}&amp;lt;/job-tracker&amp;gt;
            &amp;lt;name-node&amp;gt;${nameNode}&amp;lt;/name-node&amp;gt;
              &amp;lt;job-xml&amp;gt;hdfs:///user/hue/oozie/workspaces/hue-oozie-1460736691.98/hive-site.xml&amp;lt;/job-xml&amp;gt;
            &amp;lt;configuration&amp;gt;
                &amp;lt;property&amp;gt;
                    &amp;lt;name&amp;gt;oozie.use.system.libpath&amp;lt;/name&amp;gt;
                    &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
                &amp;lt;/property&amp;gt;
            &amp;lt;/configuration&amp;gt;
            &amp;lt;master&amp;gt;yarn-cluster&amp;lt;/master&amp;gt;
            &amp;lt;mode&amp;gt;cluster&amp;lt;/mode&amp;gt;
            &amp;lt;name&amp;gt;MySpark&amp;lt;/name&amp;gt;
              &amp;lt;class&amp;gt;org.apache.spark.examples.mllib.JavaALS&amp;lt;/class&amp;gt;
            &amp;lt;jar&amp;gt;hdfs:///user/hue/oozie/workspaces/hue-oozie-1460736691.98/lib/test.py&amp;lt;/jar&amp;gt;
        &amp;lt;/spark&amp;gt;
        &amp;lt;ok to="End"/&amp;gt;
        &amp;lt;error to="Kill"/&amp;gt;
    &amp;lt;/action&amp;gt;
    &amp;lt;end name="End"/&amp;gt;
&amp;lt;/workflow-app&amp;gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is my "job.properties":&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=&amp;lt;MY_SERVER_FQDN_HERE&amp;gt;:8032
nameNode=hdfs://&amp;lt;MY_SERVER_FQDN_HERE&amp;gt;:8020&lt;/PRE&gt;&lt;P&gt;Please note that:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) I've also uploaded "hive-site.xml" in the same directory as the 2 files described above. As you can see from "workflow.xml", it should also be picked up.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) The "test.py" script is under a "lib" directory in the Workspace created by HUE. It gets picked up. In that directory I also took care of uploading several Jars belonging to some Derby DB Connector, probably required to collect Stats, to avoid other exceptions being thrown.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3) I've tried adding&amp;nbsp;a workflow property "oozie.action.sharelib.for.spark", with value "hcatalog,hive,hive2", with no success.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;4) As you can see in the Python Script described&amp;nbsp;above, I've alternately used an SQLContext or a HiveContext object inside the Script. Results are the same (the error message is slightly different though).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;5) ShareLib should be OK too:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;oozie admin -shareliblist

[Available ShareLib]
oozie
hive
distcp
hcatalog
sqoop
mapreduce-streaming
spark
hive2
pig&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I suspect the Hive Metastore is not being read; that's probably the issue. But I've run out of ideas and I'm not able to get it working... Thanks in advance for any feedback!&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:19:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/40834#M28318</guid>
      <dc:creator>FrozenWave</dc:creator>
      <dc:date>2022-09-16T10:19:27Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie workflow, Spark action (using simple Dataframe): "Table not found" error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/40836#M28319</link>
      <description>&lt;P&gt;Update: If I use "spark-submit", the script runs successfully.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Syntax used for "spark-submit":&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;spark-submit \
  --master yarn-cluster \
  --deploy-mode cluster \
  --executor-memory 500M \
  --total-executor-cores 1 \
  hdfs:///user/hue/oozie/workspaces/hue-oozie-1460736691.98/lib/test.py \
  10&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Excerpt from output log:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;16/05/15 00:30:57 INFO parse.ParseDriver: Parsing command: select * from sales_fact
16/05/15 00:30:58 INFO parse.ParseDriver: Parse Completed
16/05/15 00:30:58 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0-cdh5.5.1
16/05/15 00:30:58 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.5.1
16/05/15 00:30:59 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
16/05/15 00:30:59 INFO spark.SparkContext: Invoking stop() from shutdown hook&lt;/PRE&gt;</description>
      <pubDate>Sat, 14 May 2016 22:39:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/40836#M28319</guid>
      <dc:creator>FrozenWave</dc:creator>
      <dc:date>2016-05-14T22:39:27Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie workflow, Spark action (using simple Dataframe): "Table not found" error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/41356#M28320</link>
      <description>&lt;P&gt;Update: I got to a working solution; here is a brief howto to reproduce the result:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;JOB MAIN BOX CONFIGURATION (CLICK THE "PENCIL" EDIT ICON&lt;/P&gt;&lt;P&gt;ON TOP OF THE WORKFLOW MAIN SCREEN):&lt;/P&gt;&lt;PRE&gt;Spark Master:			yarn-cluster
Mode:				cluster
App Name:			MySpark
Jars/py files:			hdfs:///user/hue/oozie/workspaces/hue-oozie-1463575878.15/lib/test.py
Main Class:			&amp;lt;WHATEVER_STRING_HERE&amp;gt;  (E.g. "clear", or "org.apache.spark.examples.mllib.JavaALS"). We do not have a Main Class in our ".py" script!
Arguments:			NO ARGUMENTS DEFINED&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;WORKFLOW SETTINGS (CLICK GEAR ICON ON TOP RIGHT OF&lt;/P&gt;&lt;P&gt;THE WORKFLOW MAIN SCREEN):&lt;/P&gt;&lt;PRE&gt;Variables:			oozie.use.system.libpath --&amp;gt; &lt;STRONG&gt;true&lt;/STRONG&gt;
Workspace:			hue-oozie-1463575878.15
Hadoop Properties:		oozie.launcher.yarn.app.mapreduce.am.env --&amp;gt; &lt;STRONG&gt;SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark&lt;/STRONG&gt;
Show Graph Arrows:		CHECKED
Version:			uri:oozie:workflow:0.5
Job XML:			EMPTY
SLA Configuration:		UNCHECKED&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;JOB DETAILED CONFIGURATION (CLICK THE "PENCIL" EDIT ICON&lt;/P&gt;&lt;P&gt;ON TOP OF THE WORKFLOW MAIN SCREEN&amp;nbsp;AND THEN THE TRIANGULAR&lt;/P&gt;&lt;P&gt;ICON ON TOP RIGHT OF THE MAIN JOB BOX TO EDIT IT IN DETAIL):&lt;/P&gt;&lt;PRE&gt;- PROPERTIES TAB:
-----------------
Options List:			--files hdfs:///user/hue/oozie/workspaces/hue-oozie-1463575878.15/hive-site.xml
Prepare:			NO PREPARE STEPS DEFINED
Job XML:			EMPTY
Properties:			NO PROPERTIES DEFINED
Retry:				NO RETRY OPTIONS DEFINED

- SLA TAB:
----------
Enabled:			UNCHECKED

- CREDENTIALS TAB:
------------------
Credentials:			NO CREDENTIALS DEFINED

- TRANSITIONS TAB:
------------------
Ok				End
Ko				Kill&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;MANUALLY EDIT MINIMAL "hive-site.xml" FILE TO BE PASSED TO THE SPARK-ON-HIVE&lt;/P&gt;&lt;P&gt;CONTAINER&amp;nbsp;TO BE ABLE TO ACCESS THE TABLES METASTORE FROM WHATEVER&lt;/P&gt;&lt;P&gt;NODE IN THE CLUSTER, AND UPLOAD IT TO HDFS:&lt;/P&gt;&lt;PRE&gt;vi hive-site.xml

---
&amp;lt;configuration&amp;gt;
	&amp;lt;property&amp;gt;
		&amp;lt;name&amp;gt;hive.metastore.uris&amp;lt;/name&amp;gt;
		&amp;lt;value&amp;gt;thrift://&lt;STRONG&gt;&amp;lt;THRIFT_HOSTNAME&amp;gt;&lt;/STRONG&gt;:9083&amp;lt;/value&amp;gt;
	&amp;lt;/property&amp;gt;
&amp;lt;/configuration&amp;gt;
---

hdfs dfs -put hive-site.xml /user/hue/oozie/workspaces/hue-oozie-1463575878.15&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;EDIT&amp;nbsp;THE PYSPARK SCRIPT AND UPLOAD IT INTO THE "lib" DIRECTORY&lt;/P&gt;&lt;P&gt;IN THE WORKFLOW FOLDER:&lt;/P&gt;&lt;PRE&gt;vi test.py

---
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql import HiveContext
from pyspark.sql.functions import *

sconf = SparkConf().setAppName("MySpark").set("spark.driver.memory", "1g").setMaster("yarn-cluster")
sc = SparkContext(conf=sconf)

sqlCtx = HiveContext(sc)

xxx_DF = sqlCtx.table("table")
yyy_DF = xxx_DF.select("fieldname").saveAsTable("new_table")
---

hdfs dfs -put test.py /user/hue/oozie/workspaces/hue-oozie-1463575878.15/lib&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NOW YOU CAN SUBMIT THE WORKFLOW IN YARN:&lt;/P&gt;&lt;PRE&gt;- Click the "PLAY" Submit Icon on top of the screen&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ADDITIONAL INFO: AUTO-GENERATED "workflow.xml":&lt;/P&gt;&lt;PRE&gt;&amp;lt;workflow-app name="Spark_on_Oozie" xmlns="uri:oozie:workflow:0.5"&amp;gt;
  &amp;lt;global&amp;gt;
            &amp;lt;configuration&amp;gt;
                &amp;lt;property&amp;gt;
                    &amp;lt;name&amp;gt;oozie.launcher.yarn.app.mapreduce.am.env&amp;lt;/name&amp;gt;
                    &amp;lt;value&amp;gt;SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark&amp;lt;/value&amp;gt;
                &amp;lt;/property&amp;gt;
            &amp;lt;/configuration&amp;gt;
  &amp;lt;/global&amp;gt;
    &amp;lt;start to="spark-9fa1"/&amp;gt;
    &amp;lt;kill name="Kill"&amp;gt;
        &amp;lt;message&amp;gt;Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]&amp;lt;/message&amp;gt;
    &amp;lt;/kill&amp;gt;
    &amp;lt;action name="spark-9fa1"&amp;gt;
        &amp;lt;spark xmlns="uri:oozie:spark-action:0.1"&amp;gt;
            &amp;lt;job-tracker&amp;gt;${jobTracker}&amp;lt;/job-tracker&amp;gt;
            &amp;lt;name-node&amp;gt;${nameNode}&amp;lt;/name-node&amp;gt;
            &amp;lt;master&amp;gt;yarn-cluster&amp;lt;/master&amp;gt;
            &amp;lt;mode&amp;gt;cluster&amp;lt;/mode&amp;gt;
            &amp;lt;name&amp;gt;MySpark&amp;lt;/name&amp;gt;
              &amp;lt;class&amp;gt;clear&amp;lt;/class&amp;gt;
            &amp;lt;jar&amp;gt;hdfs:///user/hue/oozie/workspaces/hue-oozie-1463575878.15/lib/test.py&amp;lt;/jar&amp;gt;
              &amp;lt;spark-opts&amp;gt;--files hdfs:///user/hue/oozie/workspaces/hue-oozie-1463575878.15/hive-site.xml&amp;lt;/spark-opts&amp;gt;
        &amp;lt;/spark&amp;gt;
        &amp;lt;ok to="End"/&amp;gt;
        &amp;lt;error to="Kill"/&amp;gt;
    &amp;lt;/action&amp;gt;
    &amp;lt;end name="End"/&amp;gt;
&amp;lt;/workflow-app&amp;gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ADDITIONAL INFO: AUTO-GENERATED "job.properties":&lt;/P&gt;&lt;PRE&gt;oozie.use.system.libpath=true
security_enabled=False
dryrun=False
jobTracker=&lt;STRONG&gt;&amp;lt;JOBTRACKER_HOSTNAME&amp;gt;&lt;/STRONG&gt;:8032&lt;/PRE&gt;</description>
      <pubDate>Thu, 26 May 2016 11:05:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/41356#M28320</guid>
      <dc:creator>FrozenWave</dc:creator>
      <dc:date>2016-05-26T11:05:30Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie workflow, Spark action (using simple Dataframe): "Table not found" error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/41357#M28321</link>
      <description>&lt;P&gt;Congratulations on solving your issue and thank you for such a detailed description of the solution.&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 11:53:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/41357#M28321</guid>
      <dc:creator>cjervis</dc:creator>
      <dc:date>2016-05-26T11:53:05Z</dc:date>
    </item>
    <item>
      <title>Oozie job error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/88468#M28322</link>
      <description>&lt;P&gt;Dear All,&lt;/P&gt;&lt;P&gt;I am facing an issue with Oozie while running a simple job from the HUE GUI.&lt;/P&gt;&lt;P&gt;I am getting the error below. Please help me!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Error:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;"traceback": [
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", 112, "get_response", "response = wrapped_callback(request, *callback_args, **callback_kwargs)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/transaction.py", 371, "inner", "return func(*args, **kwargs)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/apps/oozie/src/oozie/decorators.py", 113, "decorate", "return view_func(request, *args, **kwargs)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/apps/oozie/src/oozie/decorators.py", 75, "decorate", "return view_func(request, *args, **kwargs)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/apps/oozie/src/oozie/views/editor2.py", 373, "submit_workflow", "return _submit_workflow_helper(request, workflow, submit_action=reverse('oozie:editor_submit_workflow', kwargs={'doc_id': workflow.id}))" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/apps/oozie/src/oozie/views/editor2.py", 428, "_submit_workflow_helper", "'is_oozie_mail_enabled': _is_oozie_mail_enabled(request.user)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/apps/oozie/src/oozie/views/editor2.py", 435, "_is_oozie_mail_enabled", "oozie_conf = api.get_configuration()" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/desktop/libs/liboozie/src/liboozie/oozie_api.py", 319, "get_configuration", "resp = self._root.get('admin/configuration', params)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", 100, "get", "return self.invoke(\"GET\", relpath, params, headers=headers, allow_redirects=True, clear_cookies=clear_cookies)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", 80, "invoke", "clear_cookies=clear_cookies)" ],
  [ "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/desktop/core/src/desktop/lib/rest/http_client.py", 196, "execute", "raise self._exc_class(ex)" ] ] }&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Thanks&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;HadoopHelp&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 29 Mar 2019 13:07:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/88468#M28322</guid>
      <dc:creator>HadoopHelp</dc:creator>
      <dc:date>2019-03-29T13:07:33Z</dc:date>
    </item>
    <item>
      <title>Re: Oozie workflow, Spark action (using simple Dataframe): "Table not found" error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/323907#M28323</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am having the same issue on CDP 7.1.6 with Oozie 5.1.0, but the suggested solution does not seem to work anymore.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Setting&lt;/P&gt;&lt;PRE&gt;&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;oozie.launcher.yarn.app.mapreduce.am.env&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark/&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;has no effect.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there anything else I can do? Did the setting change?&lt;/P&gt;</description>
      <pubDate>Wed, 08 Sep 2021 10:20:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Oozie-workflow-Spark-action-using-simple-Dataframe-quot/m-p/323907#M28323</guid>
      <dc:creator>joha0123</dc:creator>
      <dc:date>2021-09-08T10:20:43Z</dc:date>
    </item>
  </channel>
</rss>

