<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Shark connectivity using Java and JDBC/other driver ? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7798#M1401</link>
    <description>&lt;P&gt;It should be possible to use the Hive client to access Shark.&amp;nbsp;&lt;A target="_blank" href="https://cwiki.apache.org/confluence/display/Hive/HiveClient"&gt;https://cwiki.apache.org/confluence/display/Hive/HiveClient&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have not tried it myself, so maybe others can weigh in with better info, such as which version of Hive is used with Shark 0.9. I think it is Hive 0.11, judging from the build.&lt;/P&gt;</description>
    <pubDate>Tue, 25 Mar 2014 09:22:42 GMT</pubDate>
    <dc:creator>srowen</dc:creator>
    <dc:date>2014-03-25T09:22:42Z</dc:date>
    <item>
      <title>Shark connectivity using Java and JDBC/other driver ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7796#M1400</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I want to run a Shark query using Java code.&lt;/P&gt;&lt;P&gt;Does anyone know how we can connect to Shark using Java and a JDBC/other driver?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Abhishek&lt;/P&gt;</description>
      <pubDate>Tue, 25 Mar 2014 09:05:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7796#M1400</guid>
      <dc:creator>abhietc31</dc:creator>
      <dc:date>2014-03-25T09:05:56Z</dc:date>
    </item>
    <item>
      <title>Re: Shark connectivity using Java and JDBC/other driver ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7798#M1401</link>
      <description>&lt;P&gt;It should be possible to use the Hive client to access Shark.&amp;nbsp;&lt;A target="_blank" href="https://cwiki.apache.org/confluence/display/Hive/HiveClient"&gt;https://cwiki.apache.org/confluence/display/Hive/HiveClient&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have not tried it myself, so maybe others can weigh in with better info, such as which version of Hive is used with Shark 0.9. I think it is Hive 0.11, judging from the build.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Mar 2014 09:22:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7798#M1401</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2014-03-25T09:22:42Z</dc:date>
    </item>
    <item>
      <title>Re: Shark connectivity using Java and JDBC/other driver ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7800#M1402</link>
      <description>&lt;P&gt;Thanks for the reply, &lt;A target="_self" href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/133"&gt;&lt;SPAN&gt;srowen&lt;/SPAN&gt;&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Here are the setup details&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;&lt;P&gt;hive-0.11.0-bin&lt;/P&gt;&lt;P&gt;shark-0.8.1-bin-cdh4&lt;/P&gt;&lt;P&gt;spark-0.9.0-incubating-bin-cdh4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Shark&lt;/STRONG&gt; is throwing the &lt;STRONG&gt;error&lt;/STRONG&gt; below while using &lt;STRONG&gt;Hive 0.11&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Many users have commented that Hive comes in two variants, &lt;STRONG&gt;un-patched&lt;/STRONG&gt; and &lt;STRONG&gt;patched&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;I'm not sure where to get the &lt;STRONG&gt;patched&lt;/STRONG&gt; and &lt;STRONG&gt;un-patched Hive&lt;/STRONG&gt; versions.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;shark-env.sh&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;#!/usr/bin/env bash&lt;BR /&gt;# (Required) Amount of memory used per slave node. This should be in the same&lt;BR /&gt;# format as the JVM's -Xmx option, e.g. 300m or 1g.&lt;BR /&gt;export SPARK_MEM=200m&lt;BR /&gt;export SPARK_HOME="/home/training/AT_Installation/spark-0.9.0-incubating-bin-cdh4"&lt;BR /&gt;# (Required) Set the master program's memory&lt;BR /&gt;export SHARK_MASTER_MEM=200m&lt;/P&gt;&lt;P&gt;# (Required) Point to your Scala installation.&lt;BR /&gt;export SCALA_HOME="/home/training/AT_Installation/scala-2.9.3"&lt;/P&gt;&lt;P&gt;# (Required) Point to the patched Hive binary distribution&lt;BR /&gt;export HIVE_HOME="/home/training/AT_Installation/hive-0.11.0-bin"&lt;BR /&gt;export HADOOP_HOME="/usr/lib/hadoop"&lt;BR /&gt;# (Optional) Specify the location of Hive's configuration directory. 
By default,&lt;BR /&gt;# it points to $HIVE_HOME/conf&lt;BR /&gt;#export HIVE_CONF_DIR="$HIVE_HOME/conf"&lt;/P&gt;&lt;P&gt;# For running Shark in distributed mode, set the following:&lt;BR /&gt;#export HADOOP_HOME=""&lt;BR /&gt;#export SPARK_HOME=""&lt;BR /&gt;#export MASTER=""&lt;BR /&gt;# Only required if using Mesos:&lt;BR /&gt;#export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so&lt;/P&gt;&lt;P&gt;# Only required if run shark with spark on yarn&lt;BR /&gt;#export SHARK_EXEC_MODE=yarn&lt;BR /&gt;#export SPARK_ASSEMBLY_JAR=&lt;BR /&gt;#export SHARK_ASSEMBLY_JAR=&lt;/P&gt;&lt;P&gt;# (Optional) Extra classpath&lt;BR /&gt;#export SPARK_LIBRARY_PATH=""&lt;/P&gt;&lt;P&gt;# Java options&lt;BR /&gt;# On EC2, change the local.dir to /mnt/tmp&lt;BR /&gt;SPARK_JAVA_OPTS="-Dspark.local.dir=/tmp "&lt;BR /&gt;SPARK_JAVA_OPTS+="-Dspark.kryoserializer.buffer.mb=10 "&lt;BR /&gt;SPARK_JAVA_OPTS+="-verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps "&lt;BR /&gt;export SPARK_JAVA_OPTS&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;14/03/25 06:23:23 INFO HiveMetaStore.audit: ugi=training&amp;nbsp;ip=unknown-ip-addr&amp;nbsp;cmd=get_tables: db=default pat=.*&amp;nbsp;&lt;BR /&gt;Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hive.cli.CliDriver.getCommandCompletor()Ljline/Completor;&lt;BR /&gt;&amp;nbsp;at shark.SharkCliDriver$.main(SharkCliDriver.scala:184)&lt;BR /&gt;&amp;nbsp;at shark.SharkCliDriver.main(SharkCliDriver.scala)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;thanks,&lt;/P&gt;&lt;P&gt;Abhishek&lt;/P&gt;</description>
      <pubDate>Tue, 25 Mar 2014 10:27:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7800#M1402</guid>
      <dc:creator>abhietc31</dc:creator>
      <dc:date>2014-03-25T10:27:54Z</dc:date>
    </item>
    <item>
      <title>Re: Shark connectivity using Java and JDBC/other driver ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7802#M1403</link>
      <description>&lt;P&gt;The error indicates that mismatched versions of Hive are being used. Not sure that helps. I am not familiar with Shark as a user myself.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I'm not wrong, this isn't specific to the Cloudera distribution of Spark, so you may get better answers asking on the general Shark list.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 25 Mar 2014 10:34:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7802#M1403</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2014-03-25T10:34:15Z</dc:date>
    </item>
    <item>
      <title>Re: Shark connectivity using Java and JDBC/other driver ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7944#M1404</link>
      <description>&lt;P&gt;I got the solution.&lt;/P&gt;&lt;P&gt;I referred to &lt;A target="_blank" href="https://cwiki.apache.org/confluence/display/Hive/HiveClient"&gt;https://cwiki.apache.org/confluence/display/Hive/HiveClient&lt;/A&gt;&amp;nbsp;and made some changes.&lt;/P&gt;&lt;P&gt;Here's the catch:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We can use the same JDBC URL for connecting to Hive and Shark; you only need to change the port.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What did I do?&lt;/P&gt;&lt;P&gt;1. I ran Hive on port 4544 and used the JDBC URL below in the Java class HiveJdbc.java:&lt;/P&gt;&lt;P&gt;Connection con = DriverManager.&lt;EM&gt;getConnection&lt;/EM&gt;("jdbc:hive://localhost&lt;STRONG&gt;:4544&lt;/STRONG&gt;/default", "", "");&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. I ran Shark on port 4588 and used the JDBC URL below in the Java class SharkJDBC.java:&lt;/P&gt;&lt;P&gt;Connection con = DriverManager.&lt;EM&gt;getConnection&lt;/EM&gt;("jdbc:hive://localhost&lt;STRONG&gt;:4588&lt;/STRONG&gt;/default", "", "");&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The rest of the code is the same.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here's the code:&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;import java.sql.SQLException;&lt;BR /&gt;import java.sql.Connection;&lt;BR /&gt;import java.sql.ResultSet;&lt;BR /&gt;import java.sql.Statement;&lt;BR /&gt;import java.sql.DriverManager;&lt;/P&gt;&lt;P&gt;public class SharkJdbcClient {&lt;BR /&gt;private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";&lt;BR /&gt;/** @param args @throws SQLException */&lt;BR /&gt;public static void main(String[] args) throws SQLException&lt;BR /&gt;{&lt;BR /&gt;try&lt;BR /&gt;{&lt;BR /&gt;Class.forName(driverName);&lt;BR /&gt;} catch (ClassNotFoundException e) {&lt;BR /&gt;e.printStackTrace();&lt;BR /&gt;System.exit(1);&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;Connection con = 
DriverManager.getConnection("jdbc:hive://localhost:4588/default", "", "");&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Statement stmt = con.createStatement();&lt;BR /&gt;String tableName = "bank_tab1_cached";&lt;BR /&gt;System.out.println("Dropping the table: " + tableName);&lt;BR /&gt;stmt.executeQuery("drop table " + tableName);&lt;/P&gt;&lt;P&gt;ResultSet res = stmt.executeQuery("create table " + tableName + " (empid int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY " + "\",\"");&lt;BR /&gt;// show tables&lt;BR /&gt;String sql = "show tables '" + tableName + "'";&lt;BR /&gt;System.out.println("Running: " + sql);&lt;BR /&gt;res = stmt.executeQuery(sql);&lt;BR /&gt;if (res.next())&lt;BR /&gt;{ System.out.println(res.getString(1)); }&lt;BR /&gt;// describe table&lt;BR /&gt;sql = "describe " + tableName;&lt;BR /&gt;System.out.println("Running: " + sql);&lt;BR /&gt;res = stmt.executeQuery(sql);&lt;/P&gt;&lt;P&gt;while (res.next()) {&lt;BR /&gt;System.out.println(res.getString(1) + "-------" + res.getString(2));&lt;BR /&gt;}&lt;BR /&gt;// load data into table&lt;BR /&gt;// NOTE: filepath has to be local to the hive server&lt;BR /&gt;String filepath = "/home/abhi/Downloads/at_env_jar/emp_data.txt";&lt;BR /&gt;sql = "load data local inpath '" + filepath + "' into table " + tableName;&lt;BR /&gt;System.out.println("Running: " + sql);&lt;BR /&gt;res = stmt.executeQuery(sql);&lt;BR /&gt;// select * query&lt;BR /&gt;sql = "select * from " + tableName;&lt;BR /&gt;System.out.println("Running: " + sql);&lt;BR /&gt;res = stmt.executeQuery(sql);&lt;BR /&gt;while 
(res.next()) { System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2)); }&lt;BR /&gt;// regular hive query&lt;BR /&gt;sql = "select count(1) from " + tableName;&lt;BR /&gt;System.out.println("Running: " + sql);&lt;BR /&gt;res = stmt.executeQuery(sql);&lt;BR /&gt;while (res.next()) {&lt;BR /&gt;System.out.println(res.getString(1));&lt;BR /&gt;}&lt;BR /&gt;String q1 = "CREATE TABLE one AS SELECT 1 AS one FROM " + tableName + " LIMIT 1";&lt;BR /&gt;res = stmt.executeQuery(q1);&lt;/P&gt;&lt;P&gt;int rows = 0;&lt;BR /&gt;String c1 = "";&lt;BR /&gt;String c2 = "";&lt;BR /&gt;// insert into table emp_tab1 SELECT stack(3 , 1 , "row1" , 2 , "row2", 3 , "row3") AS (empid, name) FROM one;&lt;BR /&gt;System.out.println("Inserting records..... ");&lt;BR /&gt;String q2 = "insert into table " + tableName + " SELECT stack(3 , 1 , \"row1\", 2 , \"row2\", 3 , \"row3\") AS (empid, name) FROM one";&lt;BR /&gt;res = stmt.executeQuery(q2);&lt;BR /&gt;System.out.println("Successfully inserted.......... 
" );&lt;/P&gt;&lt;P&gt;}&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here's the at.sh script used for running the code:&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;#!/bin/bash&lt;BR /&gt;HADOOP_HOME="/usr/lib/hadoop"&lt;BR /&gt;HIVE_HOME="/home/abhi/Downloads/hive-0.9.0-bin"&lt;BR /&gt;HADOOP_CORE="/home/abhi/Downloads/at_env_jar/Hadoop4.1.1/hadoop-core-0.20.203.0.jar"&lt;BR /&gt;CLASSPATH=.:$HADOOP_HOME:$HADOOP_CORE:$HIVE_HOME:$HIVE_HOME/conf&lt;BR /&gt;for i in ${HIVE_HOME}/lib/*.jar ; do&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; CLASSPATH=$CLASSPATH:$i&lt;BR /&gt;done&lt;BR /&gt;java -cp $CLASSPATH SharkJdbcClient&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Compile your Java code and run at.sh (with execute permission).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="1"&gt;Cheers &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT size="1"&gt;Abhishek&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Mar 2014 10:58:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Shark-connectivity-using-Java-and-JDBC-other-driver/m-p/7944#M1404</guid>
      <dc:creator>abhietc31</dc:creator>
      <dc:date>2014-03-27T10:58:13Z</dc:date>
    </item>
  </channel>
</rss>

