Member since: 01-19-2018
Posts: 5
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1779 | 01-19-2018 10:06 AM |
09-05-2018 05:54 AM
Hi Josh Elser, thanks for your response. Yes, it looks like the same issue as the one reported in PHOENIX-4489.
01-19-2018 10:06 AM
Here is how I solved this problem; I'm sharing it in case someone else runs into the same issue. I load the two datasets separately, register each one as a temp table, and then run the join query against those two tables. Below is the sample code.
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Load the first Phoenix table as a DataFrame via the phoenix-spark data source.
String table1 = "TABLE_1";
Map<String, String> map = new HashMap<>();
map.put("zkUrl", ZOOKEEPER_URL);
map.put("table", table1);
Dataset<Row> df = sparkSession.sqlContext().load("org.apache.phoenix.spark", map);
df.registerTempTable(table1);

// Load the second Phoenix table the same way and register it as a temp table.
String table2 = "TABLE_2";
map = new HashMap<>();
map.put("zkUrl", ZOOKEEPER_URL);
map.put("table", table2);
Dataset<Row> df2 = sparkSession.sqlContext().load("org.apache.phoenix.spark", map);
df2.registerTempTable(table2);

// Run the join in Spark SQL against the two registered temp tables.
Dataset<Row> selectResult = sparkSession.sql(
        "SELECT * FROM TABLE_1 AS A JOIN TABLE_2 AS B ON A.COLUMN_1 = B.COLUMN_2 WHERE B.COLUMN_2 = 'XYZ'");
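For reference, the same idea can also be written with the Spark 2.x DataFrameReader API and createOrReplaceTempView (registerTempTable is deprecated in Spark 2.x). This is only a sketch, not the exact code I ran; the table names, column names, and the ZOOKEEPER_URL placeholder are the same assumptions as above.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixJoinSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("PhoenixJoinSketch")
                .getOrCreate();

        // Load each Phoenix table as its own DataFrame through the phoenix-spark data source.
        // Replace "ZOOKEEPER_URL" with the cluster's ZooKeeper quorum, e.g. "zk-host:2181".
        Dataset<Row> table1 = spark.read()
                .format("org.apache.phoenix.spark")
                .option("table", "TABLE_1")
                .option("zkUrl", "ZOOKEEPER_URL")
                .load();
        Dataset<Row> table2 = spark.read()
                .format("org.apache.phoenix.spark")
                .option("table", "TABLE_2")
                .option("zkUrl", "ZOOKEEPER_URL")
                .load();

        // Register temp views and run the join in Spark SQL, so the join is executed by
        // Spark rather than pushed down to Phoenix.
        table1.createOrReplaceTempView("TABLE_1");
        table2.createOrReplaceTempView("TABLE_2");
        Dataset<Row> joined = spark.sql(
                "SELECT * FROM TABLE_1 AS A JOIN TABLE_2 AS B "
                + "ON A.COLUMN_1 = B.COLUMN_2 WHERE B.COLUMN_2 = 'XYZ'");
        joined.show();
    }
}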