
Phoenix and HBase timeouts or lack of resources

Contributor

I am using a small test cluster of HDP 2.3 in the cloud, with 3 datanodes, each having 2 x 2 GHz CPUs and 6 GB of RAM.

While executing a query using Phoenix, I got:

================================================================================

Error: Encountered exception in sub plan [0] execution. (state=,code=0)
java.sql.SQLException: Encountered exception in sub plan [0] execution.
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:164)
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:119)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:253)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:243)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:242)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1258)
    at sqlline.Commands.execute(Commands.java:822)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:808)
    at sqlline.SqlLine.begin(SqlLine.java:681)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.sql.SQLException: java.util.concurrent.TimeoutException
    at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:247)
    at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
    at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:346)
    at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:143)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:239)
    ... 8 more
Closing: org.apache.phoenix.jdbc.PhoenixConnection

16/02/18 01:55:58 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
java.io.InterruptedIOException: Origin: InterruptedException
    at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:65)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
    at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1215)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:32675)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)
    ... 13 more

16/02/18 01:55:58 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
java.io.InterruptedIOException: Origin: InterruptedException
    at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:65)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
    at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1215)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:32675)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)
    ... 13 more

16/02/18 01:55:58 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService for row
java.util.concurrent.ExecutionException: java.io.InterruptedIOException: Origin: InterruptedException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1763)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1719)
    at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:188)
    at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.InterruptedIOException: Origin: InterruptedException
    at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:65)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
    at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
    at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1215)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:32675)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)

================================================================================

Then I increased the Phoenix query timeout to 10 minutes (even though going beyond 3 minutes is not recommended).
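(A sketch of how such a timeout increase is typically applied; these are the standard Phoenix/HBase client settings in hbase-site.xml, and the 10-minute values below are illustrative:)

<!-- hbase-site.xml on the Phoenix client -->
<property>
  <!-- Phoenix client-side query timeout, in milliseconds -->
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
<property>
  <!-- HBase RPC timeout; keep it at least as large as the Phoenix timeout -->
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>

With the longer timeout, the query then failed with a memory error instead: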

================================================================================

Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 95157178 bytes could not be allocated from remaining memory of 104319336 bytes from global pool of 157915545 bytes after waiting for 10000ms.
    at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:81)
    at org.apache.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:33)
    at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:144)
    at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:104)
    at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:73)
    at org.apache.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:66)
    at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:87)
    at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:75)

================================================================================
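(For context: the "global pool" in that error is Phoenix's server-side memory pool used for things like hash-join caches. It is sized as a percentage of the region server heap via phoenix.query.maxGlobalMemoryPercentage, which defaults to 15. A sketch of raising it in the region servers' hbase-site.xml, with an illustrative value:)

<property>
  <!-- Percentage of the region server heap Phoenix may use; the default is 15 -->
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>25</value>
</property>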

How can I decide whether another timeout adjustment will solve this, or whether this is a memory issue? And how can I estimate how many resources are needed for large queries?

Thanks,

Harshal


4 REPLIES

ACCEPTED SOLUTION

Hi @Harshal Joshi,

For large joins, please use a sort-merge join to avoid memory issues.

You just need to add the following hint to the query: "/*+ USE_SORT_MERGE_JOIN */".
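For example (a minimal sketch; the orders and customers tables and their columns are made up for illustration, and note that the hint goes immediately after the SELECT keyword):

-- Force a sort-merge join instead of the default broadcast hash join
SELECT /*+ USE_SORT_MERGE_JOIN */ o.order_id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id;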

https://phoenix.apache.org/joins.html

Contributor

Thanks @asinghal, it also improves performance.


While Phoenix does let you do joins, they are something of an anti-pattern for NoSQL engines. Ideally you would pre-join and denormalize your data with Spark, Hive, or Pig before storing it in Phoenix.

If that is not an option, use the sort-merge hint recommended above and/or use a filter predicate that restricts the input size of your join, e.g.:

select * from (
  select * from customer where id = '123'
) a
join events b on a.id = b.id

In the query above, you'd want id to be the first field in your rowkey/primary key. That lets the query execute as a range scan over just the range of keys matching the specified id, instead of a full table scan. If you can't provide a restrictive predicate, Phoenix queries will still execute in parallel, but north of several million rows, Hive or Spark queries over columnar compressed ORC or Parquet files will be significantly faster.
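For illustration, a Phoenix DDL sketch (the table and column names are hypothetical) that makes id the leading primary-key column, so the subquery above becomes a range scan:

-- id leads the rowkey, so "WHERE id = '123'" scans only that key range
CREATE TABLE customer (
    id      VARCHAR NOT NULL,
    name    VARCHAR,
    created DATE,
    CONSTRAINT pk PRIMARY KEY (id)
);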

Phoenix also integrates with Spark, in case you need to read from live Phoenix tables to perform online analysis. Spark is much better suited to analytical queries than Phoenix, but keep in mind it will do a full table scan unless you use predicate pushdown configurations.

Contributor

@Randy Gelhausen, thanks for the input. Phoenix with Spark is the next stage of our roadmap. Your input is valuable.