<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Error while connecting to hive from Spark using jdbc connection string in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151315#M113799</link>
    <description>&lt;P&gt;I don't know whether this applies to your situation, but there is a known Java issue that causes this type of exception with Kerberos 1.8.1 or higher. If your error is related to this, run kinit -R after the initial kinit so that the renewable ticket is rewritten to the credential cache. There are some additional suggestions at &lt;A href="https://community.hortonworks.com/articles/4755/common-kerberos-errors-and-solutions.html" target="_blank"&gt;https://community.hortonworks.com/articles/4755/common-kerberos-errors-and-solutions.html&lt;/A&gt;.&lt;/P&gt;</description>
    <pubDate>Thu, 15 Sep 2016 05:59:27 GMT</pubDate>
    <dc:creator>lgeorge</dc:creator>
    <dc:date>2016-09-15T05:59:27Z</dc:date>
    <item>
      <title>Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151312#M113796</link>
      <description>&lt;P&gt;I am trying to connect to Hive from Spark inside the map function, like below:&lt;/P&gt;&lt;P&gt;String driver = "org.apache.hive.jdbc.HiveDriver";
Class.forName(driver);
Connection con_con1 = DriverManager.getConnection(
    "jdbc:hive2://server1.net:10001/default;principal=hive/server1.net@abc.xyz.NET;ssl=false;transportMode=http;httpPath=cliservice",
    "username", "password");&lt;/P&gt;&lt;P&gt;But I am getting the following error:&lt;/P&gt;&lt;P&gt;javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 15:08:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151312#M113796</guid>
      <dc:creator>pooja_khandelwa</dc:creator>
      <dc:date>2016-09-14T15:08:43Z</dc:date>
    </item>
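The connection attempt in the question above can be sketched as a small standalone class. This is a hedged sketch, not the poster's actual code: the host, port, realm, and credentials are placeholders, and URL construction is separated from opening the connection so the string itself can be inspected. Actually opening the connection requires the Hive JDBC driver jar and a valid Kerberos ticket, so that part is kept in its own method.

```java
// Hedged sketch of the questioner's JDBC pattern. Host, realm, and
// credentials below are placeholders, not values from a real cluster.
public class HiveJdbcConnect {

    // Build the HiveServer2 URL in HTTP transport mode, mirroring the question.
    static String buildUrl(String host, int port, String principal) {
        return "jdbc:hive2://" + host + ":" + port + "/default"
                + ";principal=" + principal
                + ";ssl=false;transportMode=http;httpPath=cliservice";
    }

    // Requires the Hive JDBC driver on the classpath and a Kerberized cluster;
    // not exercised here.
    static java.sql.Connection open(String url, String user, String pass) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        return java.sql.DriverManager.getConnection(url, user, pass);
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("server1.net", 10001, "hive/server1.net@ABC.XYZ.NET"));
    }
}
```

Keeping the URL builder separate makes it easy to confirm that required parameters such as principal and httpPath are present before any Kerberos handshake is attempted.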
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151313#M113797</link>
      <description>&lt;P&gt;This looks like a credential issue. Check that the credentials you entered are correct.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 15:19:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151313#M113797</guid>
      <dc:creator>nitinshk77</dc:creator>
      <dc:date>2016-09-14T15:19:18Z</dc:date>
    </item>
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151314#M113798</link>
      <description>&lt;P&gt;The credentials are correct. The same connection string works fine from the edge node, but not from the Spark program.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 15:27:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151314#M113798</guid>
      <dc:creator>pooja_khandelwa</dc:creator>
      <dc:date>2016-09-14T15:27:32Z</dc:date>
    </item>
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151315#M113799</link>
      <description>&lt;P&gt;I don't know whether this applies to your situation, but there is a known Java issue that causes this type of exception with Kerberos 1.8.1 or higher. If your error is related to this, run kinit -R after the initial kinit so that the renewable ticket is rewritten to the credential cache. There are some additional suggestions at &lt;A href="https://community.hortonworks.com/articles/4755/common-kerberos-errors-and-solutions.html" target="_blank"&gt;https://community.hortonworks.com/articles/4755/common-kerberos-errors-and-solutions.html&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 05:59:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151315#M113799</guid>
      <dc:creator>lgeorge</dc:creator>
      <dc:date>2016-09-15T05:59:27Z</dc:date>
    </item>
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151316#M113800</link>
      <description>&lt;P&gt;Your cluster is Kerberized, and this looks like a Kerberos ticket issue: you do not have a valid ticket.&lt;/P&gt;&lt;P&gt;Before running this command, first obtain a valid ticket for your user.&lt;/P&gt;&lt;P&gt;To get a valid ticket, type:&lt;/P&gt;&lt;P&gt;#kinit&lt;/P&gt;&lt;P&gt;It will prompt for your Kerberos password.&lt;/P&gt;&lt;P&gt;After you enter the password, run:&lt;/P&gt;&lt;P&gt;#klist&lt;/P&gt;&lt;P&gt;It will display the Kerberos ticket for your user.&lt;/P&gt;&lt;P&gt;Now try running the command to connect to Hive.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 15:09:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151316#M113800</guid>
      <dc:creator>balay80</dc:creator>
      <dc:date>2016-09-15T15:09:32Z</dc:date>
    </item>
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151317#M113801</link>
      <description>&lt;P&gt;OK, I have resolved the Kerberos ticket issue by adding the following lines to my Java code:&lt;/P&gt;&lt;P&gt;System.setProperty("java.security.auth.login.config", "gss-jaas.conf");
System.setProperty("sun.security.jgss.debug", "true");
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
System.setProperty("java.security.krb5.conf", "krb5.conf");&lt;/P&gt;&lt;P&gt;But I am getting a different error now:&lt;/P&gt;&lt;P&gt;WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 (TID 7, SGSCAI0068.inedc.corpintra.net): java.sql.SQLException: Could not open connection to jdbc:hive2://**********.inedc.corpintra.net:10001/default;principal=hive/******.inedc.corpintra.net@*****;transportMode=http;httpPath=cliservice: null
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206)
at org.apache.hive.jdbc.HiveConnection.&amp;lt;init&amp;gt;(HiveConnection.java:178)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at GetDataFromXMLStax.returnXmlTagMatchStatus(GetDataFromXMLStax.java:77)
at SP_LogFileNotifier_FileParsingAndCreateEmailContent$2.call(SP_LogFileNotifier_FileParsingAndCreateEmailContent.java:200)
at SP_LogFileNotifier_FileParsingAndCreateEmailContent$2.call(SP_LogFileNotifier_FileParsingAndCreateEmailContent.java:1)
at org.apache.spark.api.java.JavaPairRDD$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
at scala.collection.Iterator$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$anon$11.next(Iterator.scala:328)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsHadoopDataset$1$anonfun$13$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1109)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsHadoopDataset$1$anonfun$13$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsHadoopDataset$1$anonfun$13$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1285)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsHadoopDataset$1$anonfun$13.apply(PairRDDFunctions.scala:1116)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsHadoopDataset$1$anonfun$13.apply(PairRDDFunctions.scala:1095)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
... 22 more&lt;/P&gt;&lt;P&gt;It seems the connection string is not correct. I am calling this connection string from inside a Spark program:&lt;/P&gt;&lt;P&gt;Connection con_con1 = DriverManager.getConnection(
    "jdbc:hive2://**********.inedc.corpintra.net:10001/default;principal=hive/**********.inedc.corpintra.net@*****.NET;transportMode=http;httpPath=cliservice");&lt;/P&gt;&lt;P&gt;Please help me with the correct connection string.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 15:19:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151317#M113801</guid>
      <dc:creator>pooja_khandelwa</dc:creator>
      <dc:date>2016-09-15T15:19:20Z</dc:date>
    </item>
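The reply above sets several JVM system properties before opening the connection. One point worth making explicit: these properties are read by the JAAS/GSS machinery at login time, so they must be set before the first getConnection call. A minimal sketch of that setup, assuming the same file names as the post (gss-jaas.conf and krb5.conf are placeholders for files you supply):

```java
// Hedged sketch: the Kerberos-related properties from the reply must be set
// before the first JDBC login attempt, because the GSS/JAAS layer reads them
// when it performs the login. File names mirror the post and are placeholders.
public class KerberosJvmSetup {

    static void configure(String jaasConf, String krb5Conf, boolean debug) {
        // Point the JVM at the JAAS login configuration and krb5.conf.
        System.setProperty("java.security.auth.login.config", jaasConf);
        System.setProperty("java.security.krb5.conf", krb5Conf);
        // Let GSS acquire credentials itself (ticket cache or keytab)
        // instead of requiring them on the current Subject.
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        // Optional GSS debug output, useful while diagnosing SASL failures.
        System.setProperty("sun.security.jgss.debug", Boolean.toString(debug));
    }

    public static void main(String[] args) {
        configure("gss-jaas.conf", "krb5.conf", true);
        System.out.println(System.getProperty("javax.security.auth.useSubjectCredsOnly"));
    }
}
```

On Spark executors, note that these calls only affect the JVM they run in; properties set on the driver do not automatically propagate to executor JVMs, which is one reason code that works on the edge node can fail inside tasks.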
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151318#M113802</link>
      <description>&lt;P&gt;Why connect to Hive via JDBC? Use Spark SQL with the HiveContext and you have full access to all the Hive tables. This path is optimized in Spark and very fast.&lt;/P&gt;&lt;P&gt;&lt;A href="https://spark.apache.org/docs/1.6.0/sql-programming-guide.html" target="_blank"&gt;https://spark.apache.org/docs/1.6.0/sql-programming-guide.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 19:18:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151318#M113802</guid>
      <dc:creator>TimothySpann</dc:creator>
      <dc:date>2016-09-15T19:18:02Z</dc:date>
    </item>
    <item>
      <title>Re: Error while connecting to hive from Spark using jdbc connection string</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151319#M113803</link>
      <description>&lt;P&gt;Yes, I agree, but I am trying to run the Hive query inside the map method. When I use the HiveContext there, I get an error that the class is not serializable.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 20:49:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Error-while-connecting-to-hive-from-Spark-using-jdbc/m-p/151319#M113803</guid>
      <dc:creator>pooja_khandelwa</dc:creator>
      <dc:date>2016-09-15T20:49:04Z</dc:date>
    </item>
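The serialization error in the last reply is expected: contexts like HiveContext live on the driver and cannot be captured inside a per-record map closure. A common workaround is to work per partition, opening one connection for each partition of records rather than one per record or capturing driver-side objects. The sketch below illustrates only that shape in plain Java; StubConnection is a made-up stand-in for a real JDBC connection, not part of any real API:

```java
// Hedged plain-Java sketch of the per-partition (mapPartitions-style) pattern:
// open one connection per partition instead of capturing a non-serializable
// context in a per-record closure. StubConnection is hypothetical.
public class PerPartitionSketch {

    static class StubConnection {
        static int opened = 0;                 // counts connections created
        StubConnection() { opened = opened + 1; }
        String query(String row) { return "hive-result:" + row; }
        void close() { }
    }

    // Process a whole partition of rows with a single connection.
    static java.util.List processPartition(java.util.Iterator rows) {
        StubConnection con = new StubConnection();     // once per partition
        java.util.List out = new java.util.ArrayList();
        while (rows.hasNext()) {
            out.add(con.query((String) rows.next()));  // reused for every row
        }
        con.close();
        return out;
    }

    public static void main(String[] args) {
        java.util.List partition = java.util.Arrays.asList("a", "b", "c");
        System.out.println(processPartition(partition.iterator()));
        System.out.println(StubConnection.opened);     // one connection for three rows
    }
}
```

In real Spark code the same shape would be passed to mapPartitions, with the JDBC connection created inside the function so it is constructed on the executor rather than serialized from the driver.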
  </channel>
</rss>

