<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: query hive tables with spark sql in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/query-hive-tables-with-spark-sql/m-p/145382#M107950</link>
    <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/3093/oscaricardo4.html" nodeid="3093"&gt;@Jan J&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If you already have a cluster with Hive tables in it, you don't need to create those tables again with Spark. You can simply connect to the existing metastore. Please try the following:&lt;/P&gt;&lt;P&gt;1. Package your code in a jar file and copy it to your cluster. Issue Hive queries via SparkSession.sql("YOUR_QUERY") (the SparkSession should be created with enableHiveSupport()).&lt;/P&gt;&lt;P&gt;2. Run the spark-submit tool with 'driver-java-options' pointing at the Hive metastore:&lt;/P&gt;&lt;PRE&gt;--driver-java-options "-Dhive.metastore.uris=thrift://localhost:9083"&lt;/PRE&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Olga&lt;/P&gt;</description>
    <pubDate>Thu, 23 Feb 2017 00:58:43 GMT</pubDate>
    <dc:creator>olastytsyuk</dc:creator>
    <dc:date>2017-02-23T00:58:43Z</dc:date>
  </channel>
</rss>