<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes) in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161133#M24652</link>
    <description>&lt;P&gt;Thank you so much, Leonhardi!  You rock!&lt;/P&gt;&lt;P&gt;I moved the hint to follow the word "UPSERT" as you recommended, and I can now see the SORT-MERGE reflected in the EXPLAIN PLAN, and the query runs successfully and quickly.  Thank you for taking the time to reply and share your wisdom with us!&lt;/P&gt;&lt;P&gt;(I would give you points, but I apparently have no reputation yet and none to give.  I will try to remember to come back in and take care of that once I am somebody on this community forum.  :&amp;gt;)&lt;/P&gt;</description>
    <pubDate>Fri, 08 Apr 2016 23:40:57 GMT</pubDate>
    <dc:creator>thomas_forsyth</dc:creator>
    <dc:date>2016-04-08T23:40:57Z</dc:date>
    <item>
      <title>Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161129#M24648</link>
      <description>&lt;P&gt;I am using a query like the following to try to populate it (the target table MED_DOC_DM.RPT_LAB_RSLT_PRE3).  Note the “limit” statement I have included in the inner query.  Without this, I get the familiar “Size of hash cache…exceeds the maximum” error that the USE_SORT_MERGE_JOIN hint is supposed to fix.  (Note also that I am using that hint.)  However, I need to join to all of the 2,393,244 records in the RPT_LAB_RSLT_PRE2 table to get my full data set.  So the limit condition simply demonstrates that my query is sound: with the limit of 500 records, the statement completes successfully and quickly; without it, it fails as noted above.&lt;/P&gt;&lt;P&gt;Query with LIMIT condition (this query works):&lt;/P&gt;&lt;PRE&gt;UPSERT INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT /*+ USE_SORT_MERGE_JOIN */
  rslt.ORD_PROC_ID,
  rslt.ORD_DT_REAL_NBR,
  rslt.ORD_RSLT_LN_NBR,
  rslt.PTNT_ENCNTR_CSN_ID,
  rslt.PTNT_ID,
  op2.ORD_PROC_DESCR,
  rslt.SRVC_AREA_ID,
  rslt.CMPNT_ID,
  NULL AS COMPONENT_NM,
  rslt.RSLT_TSP,
  rslt.ORD_VAL_MEAS_TXT,
  rslt.REF_UNIT_TXT,
  NULL AS CBC_IND,
  NULL AS HGB_IND,
  NULL AS PLT_IND,
  NULL AS WBC_IND,
  TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN ( SELECT op.ORD_PROC_ID, op.ORD_PROC_DESCR
       FROM MED_DOC_DM.RPT_LAB_RSLT_PRE2 op
       &lt;STRONG&gt;LIMIT 500&lt;/STRONG&gt; ) op2
  ON rslt.ORD_PROC_ID = op2.ORD_PROC_ID;&lt;/PRE&gt;
&lt;P&gt;The query below is the same statement without the LIMIT condition (joining directly to RPT_LAB_RSLT_PRE2), and it fails.  Here it is as entered in sqlline, followed by the error:&lt;/P&gt;&lt;PRE&gt;0: jdbc:phoenix:&amp;gt; UPSERT INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT /*+ USE_SORT_MERGE_JOIN */
  rslt.ORD_PROC_ID,
  rslt.ORD_DT_REAL_NBR,
  rslt.ORD_RSLT_LN_NBR,
  rslt.PTNT_ENCNTR_CSN_ID,
  rslt.PTNT_ID,
  op.ORD_PROC_DESCR,
  rslt.SRVC_AREA_ID,
  rslt.CMPNT_ID,
  NULL AS COMPONENT_NM,
  rslt.RSLT_TSP,
  rslt.ORD_VAL_MEAS_TXT,
  rslt.REF_UNIT_TXT,
  NULL AS CBC_IND,
  NULL AS HGB_IND,
  NULL AS PLT_IND,
  NULL AS WBC_IND,
  TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN MED_DOC_DM.RPT_LAB_RSLT_PRE2 op ON rslt.ORD_PROC_ID = op.ORD_PROC_ID;
Error: Encountered exception in sub plan [0] execution. (state=,code=0)
java.sql.SQLException: Encountered exception in sub plan [0] execution.
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:164)
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:119)
        at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:670)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:304)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:296)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:294)
        at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1254)
        at sqlline.Commands.execute(Commands.java:822)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)
        at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
        at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
        at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:346)
        at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:143)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)&lt;/PRE&gt;</description>
      <pubDate>Fri, 08 Apr 2016 00:01:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161129#M24648</guid>
      <dc:creator>King_King</dc:creator>
      <dc:date>2016-04-08T00:01:36Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161130#M24649</link>
      <description>&lt;P&gt;Here is some additional information about the issue Maharaj and I (Tom Forsyth) are sharing.&lt;/P&gt;&lt;P&gt;The table structures involved in this failing query are provided below.  Here are the record counts in each table:&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE1 = 19,729,760&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE2 = 2,393,244&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE3 = 0 (count prior to the upsert attempt)&lt;/P&gt;&lt;P&gt;-- This table is the receiver of the upsert record set.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE3
(
  ORD_PROC_ID        VARCHAR,
  ORD_DT_REAL_NBR    VARCHAR,
  ORD_RSLT_LN_NBR    VARCHAR,
  PTNT_ENCNTR_CSN_ID VARCHAR,
  PTNT_ID            VARCHAR,
  ORD_PROC_DESCR     VARCHAR,
  SRVC_AREA_ID       VARCHAR,
  CMPNT_ID           VARCHAR,
  CMPNT_NM           VARCHAR,
  RSLT_TSP           VARCHAR,
  ORD_VAL_MEAS_TXT   VARCHAR,
  REF_UNIT_TXT       VARCHAR,
  CBC_IND            VARCHAR,
  HGB_IND            VARCHAR,
  PLT_IND            VARCHAR,
  WBC_IND            VARCHAR,
  ROW_INSERT_TSP     VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE3 PRIMARY KEY( ORD_PROC_ID, ORD_DT_REAL_NBR, ORD_RSLT_LN_NBR )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;&lt;P&gt;-- This is the first source table being queried, providing the bulk of the data being populated to the receiver table.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE1
(
  ORD_PROC_ID        VARCHAR,
  ORD_DT_REAL_NBR    VARCHAR,
  ORD_RSLT_LN_NBR    VARCHAR,
  PTNT_ENCNTR_CSN_ID VARCHAR,
  PTNT_ID            VARCHAR,
  SRVC_AREA_ID       VARCHAR,
  CMPNT_ID           VARCHAR,
  RSLT_TSP           VARCHAR,
  ORD_VAL_MEAS_TXT   VARCHAR,
  REF_UNIT_TXT       VARCHAR,
  ROW_INSERT_TSP     VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE1 PRIMARY KEY( ORD_PROC_ID, ORD_DT_REAL_NBR, ORD_RSLT_LN_NBR )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;&lt;P&gt;-- This table is joined in as the 2nd source table to effectively attach the procedure description to each record from the 1st source table.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE2
(
  ORD_PROC_ID     VARCHAR,
  ORD_PROC_DESCR  VARCHAR,
  ROW_INSERT_TSP  VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE2 PRIMARY KEY( ORD_PROC_ID )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;</description>
      <pubDate>Fri, 08 Apr 2016 02:15:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161130#M24649</guid>
      <dc:creator>thomas_forsyth</dc:creator>
      <dc:date>2016-04-08T02:15:13Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161131#M24650</link>
      <description>&lt;P&gt;I assume that if you run EXPLAIN, it shows in both cases that the hint is being ignored. I found the link below where the SQuirreL SQL JDBC client was stripping hints, but it looks like you are using sqlline, and the link says the hint works there with usage identical to yours. (Could you try it once without the UPSERT to see if that helps?)&lt;/P&gt;&lt;P&gt;&lt;A href="https://mail-archives.apache.org/mod_mbox/phoenix-user/201503.mbox/%3cfc15b78a902a1ad5c0d66efc4a5b2342@mail.gmail.com%3e" target="_blank"&gt;https://mail-archives.apache.org/mod_mbox/phoenix-user/201503.mbox/%3cfc15b78a902a1ad5c0d66efc4a5b2342@mail.gmail.com%3e&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The second possibility would obviously be to increase the hash cache; 100 MB is not that much these days, and 3M rows is not the end of the world. The relevant property is:&lt;/P&gt;&lt;P&gt;phoenix.query.maxServerCacheBytes&lt;/P&gt;&lt;P&gt;I hope Phoenix is smart enough to build the cache only from the two columns of the right side that it actually needs. But since one of them is a description column, the cache is presumably bigger than you would expect: 200 bytes * 3M rows would be about 600 MB of data.&lt;/P&gt;
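&lt;P&gt;If you want to try raising it, here is a minimal sketch of what that could look like in the hbase-site.xml picked up by the Phoenix client (the 200 MB value is only an illustration; the default is 104857600 bytes, i.e. 100 MB):&lt;/P&gt;&lt;PRE&gt;&amp;lt;!-- illustrative value only: raise the hash cache limit from the 100 MB default to 200 MB --&amp;gt;
&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;phoenix.query.maxServerCacheBytes&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;209715200&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;</description>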
      <pubDate>Fri, 08 Apr 2016 04:04:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161131#M24650</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-04-08T04:04:36Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161132#M24651</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/698/maharajmuthusamy.html" nodeid="698"&gt;@Maharaj Muthusamy&lt;/A&gt; &lt;/P&gt;&lt;P&gt;You need to set the hint after the first statement. I.e. the upsert statement not the select one so it works. Just tried it &lt;/P&gt;&lt;P&gt;This works and results in a sort merge join:&lt;/P&gt;&lt;PRE&gt;explain upsert /*+ USE_SORT_MERGE_JOIN */ into productsales select productsales.product, productsales.date, productsales.amount  from sales,productsales where sales.product = productsales.product;&lt;/PRE&gt;&lt;P&gt;This doesn't:&lt;/P&gt;&lt;PRE&gt;explain upsert  into productsales select /*+ USE_SORT_MERGE_JOIN */ productsales.product, productsales.date, productsales.amount  from sales,productsales where sales.product = productsales.product;&lt;/PRE&gt;</description>
      <pubDate>Fri, 08 Apr 2016 19:29:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161132#M24651</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-04-08T19:29:00Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161133#M24652</link>
      <description>&lt;P&gt;Thank you so much, Leonhardi!  You rock!&lt;/P&gt;&lt;P&gt;I moved the hint to follow the word "UPSERT" as you recommended, and I can now see the SORT-MERGE reflected in the EXPLAIN PLAN, and the query runs successfully and quickly.  Thank you for taking the time to reply and share your wisdom with us!&lt;/P&gt;&lt;P&gt;(I would give you points, but I apparently have no reputation yet and none to give.  I will try to remember to come back in and take care of that once I am somebody on this community forum.  :&amp;gt;)&lt;/P&gt;
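&lt;P&gt;For anyone who hits the same error, here is roughly the shape our statement has now (same column list as in the original post; only the hint placement changed):&lt;/P&gt;&lt;PRE&gt;UPSERT /*+ USE_SORT_MERGE_JOIN */ INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT
  rslt.ORD_PROC_ID,
  -- ... remaining select columns exactly as in the original statement ...
  TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN MED_DOC_DM.RPT_LAB_RSLT_PRE2 op
  ON rslt.ORD_PROC_ID = op.ORD_PROC_ID;&lt;/PRE&gt;</description>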
      <pubDate>Fri, 08 Apr 2016 23:40:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161133#M24652</guid>
      <dc:creator>thomas_forsyth</dc:creator>
      <dc:date>2016-04-08T23:40:57Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161134#M24653</link>
      <description>&lt;P&gt;Haha, no worries. Thanks for the flowers. Yeah, it's a bit weird: apparently Phoenix only takes hints globally for the whole query (not for a sub-select) and looks for them right after the first main action word.&lt;/P&gt;</description>
      <pubDate>Sat, 09 Apr 2016 01:36:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161134#M24653</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-04-09T01:36:22Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161135#M24654</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/4319/thomasforsyth.html" nodeid="4319"&gt;@Thomas Forsyth&lt;/A&gt;, welcome to HCC! You can give Ben points by accepting his answer. There is an "Accept" link below each answer. But in this case only &lt;A rel="user" href="https://community.cloudera.com/users/698/maharajmuthusamy.html" nodeid="698"&gt;@Maharaj Muthusamy&lt;/A&gt; who asked the question can do it.&lt;/P&gt;</description>
      <pubDate>Sun, 10 Apr 2016 14:29:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161135#M24654</guid>
      <dc:creator>pminovic</dc:creator>
      <dc:date>2016-04-10T14:29:45Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161136#M24655</link>
      <description>&lt;P&gt;Okay.  Thank you, Predrag.  I will work that out with Maharaj.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Apr 2016 20:41:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161136#M24655</guid>
      <dc:creator>thomas_forsyth</dc:creator>
      <dc:date>2016-04-11T20:41:16Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161137#M24656</link>
      <description>&lt;P&gt;Thank you very much, Benjamin&lt;/P&gt;</description>
      <pubDate>Mon, 11 Apr 2016 22:55:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161137#M24656</guid>
      <dc:creator>King_King</dc:creator>
      <dc:date>2016-04-11T22:55:03Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161138#M24657</link>
      <description>&lt;P&gt;Thank you Ben&lt;/P&gt;</description>
      <pubDate>Mon, 11 Apr 2016 23:25:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161138#M24657</guid>
      <dc:creator>King_King</dc:creator>
      <dc:date>2016-04-11T23:25:50Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161139#M24658</link>
      <description>&lt;P&gt;Here is some additional information about the issue Maharaj and I (Tom Forsyth) are sharing.&lt;/P&gt;&lt;P&gt;The table structures involved in this failing query are provided below.  Here are the record counts in each table:&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE1 = 19,729,760&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE2 = 2,393,244&lt;/P&gt;&lt;P&gt;RPT_LAB_RSLT_PRE3 = 0 (count prior to the upsert attempt)&lt;/P&gt;&lt;P&gt;Maharaj mentions a "limit" condition, but the query he provided with the log results does not include one.  Here is the version of the query that we tried which does run successfully with the limit condition.  Ultimately, we need the query Maharaj provided to run without issue, as we do not want to limit the number of procedure descriptions joined into the data set.  The limited version of the query was only used to confirm that the query syntax is sound, and that the problem appears to be caused by joining two source tables with large record volumes.&lt;/P&gt;&lt;PRE&gt;UPSERT INTO MED_DOC_DM.RPT_LAB_RSLT_PRE3
SELECT /*+ USE_SORT_MERGE_JOIN */
  rslt.ORD_PROC_ID,
  rslt.ORD_DT_REAL_NBR,
  rslt.ORD_RSLT_LN_NBR,
  rslt.PTNT_ENCNTR_CSN_ID,
  rslt.PTNT_ID,
  op2.ORD_PROC_DESCR,
  rslt.SRVC_AREA_ID,
  rslt.CMPNT_ID,
  NULL AS COMPONENT_NM,
  rslt.RSLT_TSP,
  rslt.ORD_VAL_MEAS_TXT,
  rslt.REF_UNIT_TXT,
  NULL AS CBC_IND,
  NULL AS HGB_IND,
  NULL AS PLT_IND,
  NULL AS WBC_IND,
  TO_CHAR(CURRENT_TIME())
FROM MED_DOC_DM.RPT_LAB_RSLT_PRE1 rslt
JOIN ( SELECT op.ORD_PROC_ID, op.ORD_PROC_DESCR
       FROM MED_DOC_DM.RPT_LAB_RSLT_PRE2 op
       &lt;STRONG&gt;LIMIT 500&lt;/STRONG&gt; ) op2
  ON rslt.ORD_PROC_ID = op2.ORD_PROC_ID;&lt;/PRE&gt;&lt;P&gt;-- This table is the receiver of the upsert record set.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE3
(
  ORD_PROC_ID        VARCHAR,
  ORD_DT_REAL_NBR    VARCHAR,
  ORD_RSLT_LN_NBR    VARCHAR,
  PTNT_ENCNTR_CSN_ID VARCHAR,
  PTNT_ID            VARCHAR,
  ORD_PROC_DESCR     VARCHAR,
  SRVC_AREA_ID       VARCHAR,
  CMPNT_ID           VARCHAR,
  CMPNT_NM           VARCHAR,
  RSLT_TSP           VARCHAR,
  ORD_VAL_MEAS_TXT   VARCHAR,
  REF_UNIT_TXT       VARCHAR,
  CBC_IND            VARCHAR,
  HGB_IND            VARCHAR,
  PLT_IND            VARCHAR,
  WBC_IND            VARCHAR,
  ROW_INSERT_TSP     VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE3 PRIMARY KEY( ORD_PROC_ID, ORD_DT_REAL_NBR, ORD_RSLT_LN_NBR )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;&lt;P&gt;-- This is the first source table being queried, providing the bulk of the data being populated to the receiver table.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE1
(
  ORD_PROC_ID        VARCHAR,
  ORD_DT_REAL_NBR    VARCHAR,
  ORD_RSLT_LN_NBR    VARCHAR,
  PTNT_ENCNTR_CSN_ID VARCHAR,
  PTNT_ID            VARCHAR,
  SRVC_AREA_ID       VARCHAR,
  CMPNT_ID           VARCHAR,
  RSLT_TSP           VARCHAR,
  ORD_VAL_MEAS_TXT   VARCHAR,
  REF_UNIT_TXT       VARCHAR,
  ROW_INSERT_TSP     VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE1 PRIMARY KEY( ORD_PROC_ID, ORD_DT_REAL_NBR, ORD_RSLT_LN_NBR )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;&lt;P&gt;-- This table is joined in as the 2nd source table to effectively attach the procedure description to each record from the 1st source table.&lt;/P&gt;&lt;PRE&gt;CREATE TABLE MED_DOC_DM.RPT_LAB_RSLT_PRE2
(
  ORD_PROC_ID     VARCHAR,
  ORD_PROC_DESCR  VARCHAR,
  ROW_INSERT_TSP  VARCHAR,
  CONSTRAINT PK_RPT_LAB_RSLT_PRE2 PRIMARY KEY( ORD_PROC_ID )
)
IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY', SALT_BUCKETS = 12;&lt;/PRE&gt;</description>
      <pubDate>Sat, 23 Apr 2016 23:08:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161139#M24658</guid>
      <dc:creator>thomas_forsyth</dc:creator>
      <dc:date>2016-04-23T23:08:03Z</dc:date>
    </item>
    <item>
      <title>Re: Phoenix - Query - Error Size of hash cache (104857641 bytes) exceeds the maximum allowed size (104857600 bytes)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161140#M24659</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/168/bleonhardi.html" nodeid="168" target="_blank"&gt;@Benjamin Leonhardi&lt;/A&gt; could you please tell me how to actually increase the hash cache? I agree 100MB is not much.&lt;/P&gt;&lt;P&gt;I'm running an HDP 2.6.5 cluster with HBase 1.1.2 and Phoenix enabled. I couldn't find the property phoenix.query.maxServerCacheBytes defined in hbase-site.xml config file, so I tried adding it in the "Custom hbase-site" section in Ambari and setting it to 500Mb as shown in the picture below:&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="107791-1554834728566.png" style="width: 1546px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/20691iEA2FD49D34F2472B/image-size/medium?v=v2&amp;amp;px=400" role="button" title="107791-1554834728566.png" alt="107791-1554834728566.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Now I can see the variable defined in hbase-site.xml, but I am still getting the following error "...maximum allowed size (104857600 bytes)" even though I have changed that setting to 500MB).&lt;/P&gt;&lt;P&gt;Any ideas or suggestions?&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Sun, 18 Aug 2019 11:55:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Phoenix-Query-Error-Size-of-hash-cache-104857641-bytes/m-p/161140#M24659</guid>
      <dc:creator>lautaroc</dc:creator>
      <dc:date>2019-08-18T11:55:42Z</dc:date>
    </item>
  </channel>
</rss>

