Support Questions

Find answers, ask questions, and share your expertise

"Could not find hash cache" / timeout errors


I am running a Phoenix (4.4.0, thick client) left join query from JMeter.

When I run the same query from 3 to 5 parallel threads, I run into the following error:
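For context, the test is essentially the following (a minimal sketch; the table names, columns, and ZooKeeper quorum are placeholders, not my real schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PhoenixJoinLoadTest {
    // Placeholder schema: a left join of the kind that triggers the error.
    private static final String QUERY =
        "SELECT o.ORDER_ID, c.NAME "
      + "FROM ORDERS o LEFT JOIN CUSTOMERS c ON o.CUST_ID = c.CUST_ID";

    public static void main(String[] args) throws Exception {
        int threads = 5; // 3 to 5 parallel threads reproduce the issue
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                // Phoenix thick-client JDBC URL; adjust the ZooKeeper quorum.
                try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(QUERY)) {
                    while (rs.next()) {
                        // consume rows; the failure surfaces here as PhoenixIOException
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}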

org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for joinId: �l͖LW||. The cache might have expired and have been removed.
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:96)
    at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:149)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:177)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2259)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

When I increase the allocated heap in JMeter from 4 GB to 6 GB, I no longer get this error.

I also have to allocate memory in proportion to the number of parallel threads I run, so allocating ever more memory in the client application seems a costly option.

I tried increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs from 30 to 120 seconds, and then I started getting:

org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: 89367ms passed since the last invocation, timeout is currently set to 60000

The default Phoenix client timeout is 10 minutes.
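For reference, the 60000 in that message appears to be the HBase default for hbase.client.scanner.timeout.period, while the 10-minute Phoenix client timeout corresponds to phoenix.query.timeoutMs (default 600000 ms). Note that phoenix.coprocessor.maxServerCacheTimeToLiveMs is a server-side setting, so it belongs in hbase-site.xml on the region servers rather than on the client. Here is a minimal sketch of how I pass the client-side timeout properties on the connection (the ZooKeeper quorum is a placeholder, and since the region server enforces its own scanner lease, raising the client value alone may not be enough):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixTimeouts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Scanner lease timeout; 60000 ms is the HBase default, matching
        // "timeout is currently set to 60000" in the error above.
        props.setProperty("hbase.client.scanner.timeout.period", "120000");
        props.setProperty("hbase.rpc.timeout", "120000");
        // Overall Phoenix query timeout; 600000 ms (10 minutes) is the default.
        props.setProperty("phoenix.query.timeoutMs", "600000");
        // Placeholder ZooKeeper quorum.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)) {
            // ... run the join query as before ...
        }
    }
}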

Any idea on how to resolve this?

1 REPLY


Hey Pradheep Shan, I am also facing a similar issue. Were you able to find any way to resolve it?

Thanks,

Prashant Verma