<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Access spark temporary table via JDBC in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/66330#M56221</link>
    <description>&lt;P&gt;I am using Spark&amp;nbsp;2.0.2. Can you help me with the build.sbt&amp;nbsp;file?&lt;/P&gt;</description>
    <pubDate>Fri, 13 Apr 2018 23:04:22 GMT</pubDate>
    <dc:creator>LAzyDBA</dc:creator>
    <dc:date>2018-04-13T23:04:22Z</dc:date>
    <item>
      <title>Access spark temporary table via JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/51828#M56219</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;I have found a general template for accessing Spark temporary data (i.e. a DataFrame) from an external tool via JDBC. From what I have found, it should be quite simple:&lt;/P&gt;&lt;P&gt;1. Run spark-shell or submit a Spark job.&lt;/P&gt;&lt;P&gt;2. Configure a HiveContext and then start HiveThriftServer from the job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3. In a separate session, access the Thrift server via beeline and query the data.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is my code on Spark 2.1:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2
import org.apache.spark.sql.hive.thriftserver._

val sql = new HiveContext(sc)
sql.setConf("hive.server2.thrift.port", "10002")
sql.setConf("hive.server2.authentication","KERBEROS" )
sql.setConf("hive.server2.authentication.kerberos.principal","hive/host1.lab.hadoop.net@LAB.HADOOP.NET" )
sql.setConf("hive.server2.authentication.kerberos.keytab","/home/h.keytab" )
sql.setConf("spark.sql.hive.thriftServer.singleSession","true")
val data = sql.sql("select 112 as id")
data.collect
data.createOrReplaceTempView("yyy")
sql.sql("show tables").show

HiveThriftServer2.startWithContext(sql)     
 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException     &lt;/PRE&gt;&lt;P&gt;Connect to the JDBC server:&lt;/P&gt;&lt;PRE&gt;beeline -u "jdbc:hive2://localhost:10002/default;principal=hive/host1.lab.hadoop.net@LAB.HADOOP.NET"&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, after I launch HiveThriftServer2, I can connect to the Spark Thrift server but do not see the temporary table. The command "show tables" does not list any temporary table, and trying to query "yyy" throws an error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;scala&amp;gt; sql.sql("show tables").collect
res11: Array[org.apache.spark.sql.Row] = Array([,sometablename,true], [,yyy,true])

scala&amp;gt; 17/03/06 11:15:50 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
org.apache.spark.sql.AnalysisException: Table or view not found: yyy; line 1 pos 14
        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:459)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:478)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:463)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I create a table from beeline via "create table t as select 100 as id", the table is created and I can see it in spark-shell (with the data stored locally in the spark-warehouse directory), so the other direction works.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the question is: what am I missing? Why can't I see the temporary table?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 11:11:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/51828#M56219</guid>
      <dc:creator>Tomas79</dc:creator>
      <dc:date>2022-09-16T11:11:59Z</dc:date>
    </item>
    <item>
      <title>Re: Access spark temporary table via JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/51906#M56220</link>
      <description>&lt;P&gt;I have found out what the problem was. The solution is to set the singleSession property to true on the command line, because setting it programmatically does not seem to take effect.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;/bin/spark-shell --conf spark.sql.hive.thriftServer.singleSession=true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;WORKS.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;/bin/spark-shell &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;...&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;sql.setConf("spark.sql.hive.thriftServer.singleSession","true")&lt;/P&gt;&lt;P&gt;...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;DOES NOT WORK.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Mar 2017 11:52:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/51906#M56220</guid>
      <dc:creator>Tomas79</dc:creator>
      <dc:date>2017-03-08T11:52:53Z</dc:date>
    </item>
    <item>
      <title>Re: Access spark temporary table via JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/66330#M56221</link>
      <description>&lt;P&gt;I am using Spark&amp;nbsp;2.0.2. Can you help me with the build.sbt&amp;nbsp;file?&lt;/P&gt;</description>
      <pubDate>Fri, 13 Apr 2018 23:04:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Access-spark-temporary-table-via-JDBC/m-p/66330#M56221</guid>
      <dc:creator>LAzyDBA</dc:creator>
      <dc:date>2018-04-13T23:04:22Z</dc:date>
    </item>
  </channel>
</rss>

