Support Questions

Find answers, ask questions, and share your expertise

"Could not find hash cache" error / timeout errors


I run a Phoenix (4.4.0, thick client) left-join query from JMeter.

When I run the same query from 3 to 5 parallel threads, I hit the following error:

org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for joinId: �l͖LW||. The cache might have expired and have been removed.
	at org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(
	at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(
	at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
	at org.apache.hadoop.hbase.ipc.RpcExecutor$


When I increase the heap allocated to JMeter from 4 GB to 6 GB, I no longer get this error.

However, I have to allocate memory in proportion to the number of parallel threads I run, and allocating ever more memory in the client application seems a costly option.

I tried increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs from 30 to 120 seconds, and then I started getting:

org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: 89367ms passed since the last invocation, timeout is currently set to 60000

The default Phoenix client timeout is 10 minutes.
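The 60000 ms in the second error appears to match the default HBase scanner timeout rather than the 10-minute Phoenix client timeout, so both may need to be raised together. A sketch of the related settings in hbase-site.xml (the values here are illustrative assumptions, not recommendations):

```xml
<!-- Sketch only: raise the server-side hash cache TTL and the scanner
     timeout together; values are illustrative. -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>120000</value> <!-- TTL (ms) of the server-side hash join cache -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value> <!-- scanner lease timeout; 60000 is the default seen in the error -->
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value> <!-- overall Phoenix query timeout; 10 min is the default -->
</property>
```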

Any idea on how to resolve this?
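One workaround sometimes suggested for this error is to avoid the server-side hash cache entirely by forcing a sort-merge join with a query hint. A sketch, with hypothetical table and column names:

```sql
-- USE_SORT_MERGE_JOIN avoids broadcasting a hash cache to the region
-- servers, at the cost of a sort on both sides of the join.
SELECT /*+ USE_SORT_MERGE_JOIN */ o.order_id, c.name
FROM orders o
LEFT JOIN customers c ON o.customer_id = c.id;
```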




Pradheep Shan

I am also facing a similar issue. Did you find any way to resolve it?


Prashant Verma