Member since: 11-18-2015
Posts: 9
Kudos Received: 7
Solutions: 0
04-11-2016
04:25 PM
Thank you Ben
04-11-2016
03:55 PM
Thank you very much, Benjamin
04-07-2016
05:01 PM
1 Kudo
I am using a query like the following to try to populate the RPT_LAB_RSLT_PRE3 table. Note the LIMIT clause I have included in the inner query. Without it, I get the familiar "Size of hash cache…exceeds the maximum" error that the USE_SORT_MERGE_JOIN hint is supposed to fix. (Note also that I am using that hint.) However, I need to join to all 2,393,244 records in the RPT_LAB_RSLT_PRE2 table to get my full data set, so the LIMIT condition simply demonstrates that my query is good: with the limit of 500 records, the statement completes successfully and quickly; without it, it fails with the error noted above.

Query with LIMIT condition (this query works):
UPSERT INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT /*+ USE_SORT_MERGE_JOIN */
    rslt.ORD_PROC_ID,
    rslt.ORD_DT_REAL_NBR,
    rslt.ORD_RSLT_LN_NBR,
    rslt.PTNT_ENCNTR_CSN_ID,
    rslt.PTNT_ID,
    op2.ORD_PROC_DESCR,
    rslt.SRVC_AREA_ID,
    rslt.CMPNT_ID,
    NULL AS COMPONENT_NM,
    rslt.RSLT_TSP,
    rslt.ORD_VAL_MEAS_TXT,
    rslt.REF_UNIT_TXT,
    NULL AS CBC_IND,
    NULL AS HGB_IND,
    NULL AS PLT_IND,
    NULL AS WBC_IND,
    TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN (
    SELECT op.ORD_PROC_ID, op.ORD_PROC_DESCR
    FROM MED_DOC_DM.RPT_LAB_RSLT_PRE2 op
    LIMIT 500
) op2
ON rslt.ORD_PROC_ID = op2.ORD_PROC_ID;
The query below, without the LIMIT condition, fails as follows:
0: jdbc:phoenix:> UPSERT INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT /*+ USE_SORT_MERGE_JOIN */
    rslt.ORD_PROC_ID,
    rslt.ORD_DT_REAL_NBR,
    rslt.ORD_RSLT_LN_NBR,
    rslt.PTNT_ENCNTR_CSN_ID,
    rslt.PTNT_ID,
    op.ORD_PROC_DESCR,
    rslt.SRVC_AREA_ID,
    rslt.CMPNT_ID,
    NULL AS COMPONENT_NM,
    rslt.RSLT_TSP,
    rslt.ORD_VAL_MEAS_TXT,
    rslt.REF_UNIT_TXT,
    NULL AS CBC_IND,
    NULL AS HGB_IND,
    NULL AS PLT_IND,
    NULL AS WBC_IND,
    TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN MED_DOC_DM.RPT_LAB_RSLT_PRE2 op ON rslt.ORD_PROC_ID = op.ORD_PROC_ID;
Error: Encountered exception in sub plan [0] execution. (state=,code=0)
java.sql.SQLException: Encountered exception in sub plan [0] execution.
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:164)
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:119)
    at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:670)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:304)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:296)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:294)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1254)
    at sqlline.Commands.execute(Commands.java:822)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:808)
    at sqlline.SqlLine.begin(SqlLine.java:681)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)
    at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
    at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
    at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:346)
    at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:143)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
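
In case it helps: since the stack trace still goes through HashJoinPlan, my assumption is that the hint may not be taking effect. One way to check (a sketch only; the plan wording in the comments is what I would expect to look for, not output from my cluster) is to EXPLAIN the un-limited join:

-- Sketch: EXPLAIN the join to see which strategy Phoenix actually picks.
-- If the plan still builds a hash-join server cache rather than doing a
-- sort-merge join, the USE_SORT_MERGE_JOIN hint is not being applied here.
EXPLAIN
SELECT /*+ USE_SORT_MERGE_JOIN */ rslt.ORD_PROC_ID, op.ORD_PROC_DESCR
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN MED_DOC_DM.RPT_LAB_RSLT_PRE2 op ON rslt.ORD_PROC_ID = op.ORD_PROC_ID;

The 104857600-byte limit in the error appears to match the default of the phoenix.query.maxServerCacheBytes setting, so raising that (a client-side Phoenix property, as I understand it) would be a workaround, but only if the hash join is in fact the intended plan.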
Labels:
- Apache Phoenix
01-12-2016
07:25 PM
3 Kudos
You can use beeline to connect from an edge-node server to HiveServer2. Below is an example:

beeline -u "jdbc:hive2://127.0.0.1:10000/default;principal=hive/sandbox.hortonworks.com@EXAMPLE.COM;auth=kerberos" -n <user>

The key part of this example is the JDBC URL, which has to be provided for Kerberos authentication to work correctly. Note the main sections of the JDBC URL:
jdbc:hive2://127.0.0.1:10000/default
principal=hive/sandbox.hortonworks.com@EXAMPLE.COM
auth=kerberos

The first part is a standard JDBC URL that provides information about the driver (hive2), the hostname (127.0.0.1), the port number (10000), and the default database (default). The second part is specific to Kerberos: it names the service principal that HiveServer2 runs as, which the client uses to authenticate to this URL. The final part tells JDBC that you definitely want to do Kerberos authentication (auth=kerberos).

You'll also note that the command line for beeline included a specification that I wanted to connect with a specific username (-n <user>). This is required so that beeline knows which Kerberos TGT to look for. All of this assumes that when you log in to the edge-node server, you followed standard protocol to get a Kerberos TGT. (The profile is set up so that you're automatically prompted again for your password; this establishes your TGT.)
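
For completeness, a minimal sketch of that standard protocol on the edge node, assuming the same placeholder user and the EXAMPLE.COM realm from the URL above:

# Obtain a Kerberos TGT for your user (placeholder principal; substitute your site's realm)
kinit <user>@EXAMPLE.COM

# Confirm the ticket cache that beeline will pick up
klist

# Then connect exactly as shown above
beeline -u "jdbc:hive2://127.0.0.1:10000/default;principal=hive/sandbox.hortonworks.com@EXAMPLE.COM;auth=kerberos" -n <user>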