Member since: 07-29-2013
Posts: 366
Kudos Received: 69
Solutions: 71

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5086 | 03-09-2016 01:21 AM |
| | 4300 | 03-07-2016 01:52 AM |
| | 13538 | 02-29-2016 04:40 AM |
| | 4019 | 02-22-2016 03:08 PM |
| | 5017 | 01-19-2016 02:13 PM |
04-15-2014
09:26 AM
Great, that solved my problem. I'll use the dedicated Spark forum from now on for Spark-related questions; thanks for pointing that out, and thanks for your help. Stefan
03-27-2014
03:58 AM
1 Kudo
I got the solution. I referred to https://cwiki.apache.org/confluence/display/Hive/HiveClient and made some changes. Here's the catch: the same JDBC URL format works for connecting to both Hive and Shark; you only need to change the port. What I did:

1. I ran Hive on port 4544 and used this JDBC URL in the Java class HiveJdbc.java: `Connection con = DriverManager.getConnection("jdbc:hive://localhost:4544/default", "", "");`
2. I ran Shark on port 4588 and used this JDBC URL in the Java class SharkJDBC.java: `Connection con = DriverManager.getConnection("jdbc:hive://localhost:4588/default", "", "");`

The rest of the code is the same. Here's the code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SharkJdbcClient {

  private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driverName);
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
      System.exit(1);
    }

    Connection con = DriverManager.getConnection("jdbc:hive://localhost:4588/default", "", "");
    Statement stmt = con.createStatement();
    String tableName = "bank_tab1_cached";

    // Drop and recreate the table (DDL, so use execute rather than executeQuery)
    System.out.println("Dropping the table: " + tableName);
    stmt.execute("drop table if exists " + tableName);
    stmt.execute("create table " + tableName
        + " (empid int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY \",\"");

    // show tables
    String sql = "show tables '" + tableName + "'";
    System.out.println("Running: " + sql);
    ResultSet res = stmt.executeQuery(sql);
    if (res.next()) {
      System.out.println(res.getString(1));
    }

    // describe table
    sql = "describe " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1) + "-------" + res.getString(2));
    }

    // load data into the table
    // NOTE: the file path has to be local to the Hive server
    String filepath = "/home/abhi/Downloads/at_env_jar/emp_data.txt";
    sql = "load data local inpath '" + filepath + "' into table " + tableName;
    System.out.println("Running: " + sql);
    stmt.execute(sql);

    // select * query
    sql = "select * from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getInt(1) + "\t" + res.getString(2));
    }

    // regular hive query
    sql = "select count(1) from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1));
    }

    // Create a one-row helper table, then insert rows into the main table
    // via stack(): stack(3, 1, "row1", 2, "row2", 3, "row3") AS (empid, name)
    String q1 = "CREATE TABLE one AS SELECT 1 AS one FROM " + tableName + " LIMIT 1";
    stmt.execute(q1);
    System.out.println("Inserting records...");
    String q2 = "insert into table " + tableName
        + " SELECT stack(3, 1, \"row1\", 2, \"row2\", 3, \"row3\") AS (empid, name) FROM one";
    stmt.execute(q2);
    System.out.println("Successfully inserted.");
  }
}
```

Here's the at.sh script used for running the code:

```bash
#!/bin/bash
HADOOP_HOME="/usr/lib/hadoop"
HIVE_HOME="/home/abhi/Downloads/hive-0.9.0-bin"
HADOOP_CORE="/home/abhi/Downloads/at_env_jar/Hadoop4.1.1/hadoop-core-0.20.203.0.jar"
CLASSPATH=.:$HADOOP_HOME:$HADOOP_CORE:$HIVE_HOME:$HIVE_HOME/conf

for i in ${HIVE_HOME}/lib/*.jar ; do
  CLASSPATH=$CLASSPATH:$i
done

# The class name here must match the compiled client above (SharkJdbcClient)
java -cp $CLASSPATH SharkJdbcClient
```

Compile your Java code and run at.sh (with execute permission). Cheers 🙂 Abhishek
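For reference, the contents of emp_data.txt aren't shown in the post above; given the table definition (fields terminated by ","), the file would presumably look something like this, with hypothetical values:

```
1,row1
2,row2
3,row3
```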
03-11-2014
05:54 AM
1 Kudo
No, they do not. The serving layer must be able to access HDFS; by default it takes its configuration from the Hadoop config on the local machine at /etc/hadoop/conf. If that is a problem, I can say more about how to specify the HDFS URL directly. For this reason you can run these on many different machines.
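As a rough illustration of what specifying the HDFS URL directly could look like, here is a generic Hadoop-client sketch; this is not the serving layer's actual configuration mechanism, and the NameNode host, port, and path are placeholders:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: point a Hadoop client at HDFS explicitly instead of
// relying on /etc/hadoop/conf being present on the local machine.
public class HdfsAccessSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Placeholder NameNode address; Hadoop 1.x uses the key "fs.default.name" instead.
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
    FileSystem fs = FileSystem.get(conf);
    // Just checks that HDFS is reachable at the configured URL.
    System.out.println(fs.exists(new Path("/")));
  }
}
```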
03-05-2014
08:48 AM
Thanks, that actually makes sense, as we have control over model.generations.keep.
03-05-2014
06:20 AM
Yes, you are right. It was an error in the Eclipse integration with Maven, so I changed my dev tools to IntelliJ IDEA. Thank you for helping a Windows Phone and Windows Azure developer.
02-18-2014
09:53 AM
Hard filtering rules need to be implemented in a RescorerProvider, or in logic on the caller side. Tagging users and items with a locale could make sense; it would function as a soft filter, nudging people towards things in the same locale. That could be useful as well, but it is a different thing from implementing business rules. If your items and users are nearly completely disjoint by locale (e.g. very few items are available in multiple locales and very few users shop in multiple locales), then separate models might be the best way to go: no filtering logic is needed, although you then manage a model per locale, and the models are smaller and easier to handle. If there is moderate overlap, then a unified model can benefit from the cross-locale learning.
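As a hypothetical sketch of the hard-filter idea: the class below is simplified for illustration and is not the exact RescorerProvider API, and the item-to-locale lookup is invented.

```java
import java.util.Map;

// Hypothetical hard filter in the spirit of a RescorerProvider:
// items outside the requesting user's locale are removed entirely.
public final class LocaleHardFilter {

  private final Map<Long, String> itemLocaleByID; // assumed lookup: item ID -> locale
  private final String userLocale;

  public LocaleHardFilter(Map<Long, String> itemLocaleByID, String userLocale) {
    this.itemLocaleByID = itemLocaleByID;
    this.userLocale = userLocale;
  }

  // A hard business rule: true means the item is excluded regardless of score.
  public boolean isFiltered(long itemID) {
    String itemLocale = itemLocaleByID.get(itemID);
    return itemLocale == null || !itemLocale.equals(userLocale);
  }

  // Scores pass through unchanged for items that survive the filter.
  public double rescore(long itemID, double originalScore) {
    return isFiltered(itemID) ? Double.NaN : originalScore;
  }
}
```

The soft-filter variant described above would instead keep every item and boost or penalize originalScore on a locale match, rather than excluding items outright.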
02-07-2014
01:32 PM
Nah, I've just heard this or a variant a few times now. Time to make a FAQ item... Thanks!