
Getting error when using Phoenix Driver in multiple threads, even if create new conn in each thread via DriverManager

New Contributor

Hi, I'm using the Phoenix JDBC driver, version 4.7.0.2.6.2.0-205. I noticed that when multiple threads call preparedStatement.executeQuery() at the same time, the Phoenix driver throws the exception below. The query is a join across 4 tables and takes about 15 seconds to complete. If the threads run serially, they all succeed. All threads run on separate connections.

Does this mean the Phoenix driver is not thread-safe, even when each thread owns its own connection?
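For reference, this is roughly the access pattern I'm describing: N worker threads, each opening its own connection and running the same query concurrently. This is a minimal, hedged sketch — the real JDBC URL, the 4-table join SQL, and the result handling are stood in for by a Supplier (shown as comments where the Phoenix calls would go), so the skeleton runs without a cluster.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

public class ParallelQueryDemo {
    // Runs the same "query" on the given number of threads at once and
    // collects one result per thread.
    static List<String> runInParallel(int threads, Supplier<String> queryPerThread)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            // In the real code each task does (per-thread connection,
            // hypothetical URL and SQL):
            //   try (Connection c = DriverManager.getConnection(url);
            //        PreparedStatement ps = c.prepareStatement(sql)) {
            //       return consume(ps.executeQuery());
            //   }
            futures.add(pool.submit((Callable<String>) queryPerThread::get));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // propagates any SQLException wrapped by the task
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        // Dummy query standing in for the 15-second join.
        List<String> out = runInParallel(4, () -> "ok");
        System.out.println(out);
    }
}
```

With the dummy supplier all four tasks complete; with the real Phoenix call, the second concurrent executeQuery() is what fails for me.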

java.sql.SQLException: Encountered exception in sub plan [0] execution.
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:197)
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:142)
    at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:137)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:186)
    at com.eagleinvsys.performance.PerfAggregateSelector.getPerfAggregatesWithWeights(PerfAggregateSelector.java:77)
    at com.eagleinvsys.performance.PerfAggregateSelector.run(PerfAggregateSelector.java:49)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 1487103930 bytes, but had 13
    at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
    at org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
    at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:112)
    at org.apache.phoenix.execute.TupleProjector.projectResults(TupleProjector.java:244)
    at org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:77)
    at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:124)
    at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:85)
    at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:384)
    at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:166)
    at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:162)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 1487103930 bytes, but had 13
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:443)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
    ... 15 more
2 REPLIES

Super Collaborator

I would suggest checking the region server logs. Keep in mind that for a hash join, one of the tables (the right one, as far as I remember) is sent to all region servers and kept in a cache in memory. If it is relatively large, you may run into problems where region servers run out of free memory when several heavy queries execute at the same time. An easy way to check is to run the same queries in parallel but in different processes.

New Contributor

I would agree with you, but the SQLException is thrown right away when the query is executed on the second thread. If I synchronize the two query-execute calls, then both succeed. It looks like a threading issue in the Phoenix driver.
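To illustrate the workaround: serializing just the executeQuery() calls behind a shared lock makes both queries succeed, while everything else stays concurrent. A minimal sketch of that pattern, with the ~15-second join simulated by a short sleep (the lock object and class names are illustrative, not from the real code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SerializedExecuteDemo {
    private static final Object EXECUTE_LOCK = new Object();
    static final AtomicInteger inFlight = new AtomicInteger();
    static int maxObserved = 0; // highest number of simultaneous "executes" seen

    // Only one thread at a time may be inside the (simulated) executeQuery().
    static void executeQuerySerialized() throws InterruptedException {
        synchronized (EXECUTE_LOCK) {
            int now = inFlight.incrementAndGet();
            maxObserved = Math.max(maxObserved, now);
            Thread.sleep(50); // stands in for the long-running join
            inFlight.decrementAndGet();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                try {
                    executeQuerySerialized();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("max concurrent executes: " + maxObserved); // prints 1
    }
}
```

The lock guarantees at most one execute at a time, which is why both queries succeed in this mode — but of course it also gives up the parallelism the separate connections were supposed to provide.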