03-04-2020 08:15 AM
I'm using Phoenix Queryserver Client 4.7.0.2.6.5.0-292 with HBase 2.1.4.
I'm running multiple SELECT queries to read all the data from a number of different tables, one after the other; each table has roughly 4 million rows.
My problem is that, after about 10 minutes, my ResultSet closes with this exception:
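For context, the read loop looks roughly like this (simplified; the JDBC URL, table name, and row handling are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FullTableRead {
    public static void main(String[] args) throws Exception {
        // Thin-client URL for the Phoenix Query Server; host and port are placeholders
        String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement("SELECT * FROM MY_TABLE")) {
            ps.setFetchSize(100);                  // rows per fetch from the query server
            try (ResultSet rs = ps.executeQuery()) {
                long rows = 0;
                while (rs.next()) {                // fails here with ERROR 1101 after ~10 minutes
                    // process the current row ...
                    rows++;
                }
                System.out.println("Read " + rows + " rows");
            }
        }
    }
}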
AvaticaClientRuntimeException: Remote driver error: RuntimeException: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed. -> SQLException: ERROR 1101 (XCL01): ResultSet is closed.. Error -1 (00000) null
java.lang.RuntimeException: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:683)
at org.apache.calcite.avatica.jdbc.JdbcMeta.fetch(JdbcMeta.java:811)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:253)
at org.apache.calcite.avatica.remote.Service$FetchRequest.accept(Service.java:1354)
at org.apache.calcite.avatica.remote.Service$FetchRequest.accept(Service.java:1321)
at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
at org.apache.phoenix.queryserver.server.Main$PhoenixDoAsCallback$1.run(Main.java:373)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.phoenix.queryserver.server.Main$PhoenixDoAsCallback.doAsRemoteUser(Main.java:370)
at org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:758)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
at org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:443)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at org.apache.phoenix.jdbc.PhoenixResultSet.checkOpen(PhoenixResultSet.java:215)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:772)
at org.apache.calcite.avatica.jdbc.StatementInfo._next(StatementInfo.java:85)
at org.apache.calcite.avatica.jdbc.StatementInfo.next(StatementInfo.java:81)
at org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:146)
at org.apache.calcite.avatica.jdbc.JdbcMeta.fetch(JdbcMeta.java:807)
... 24 more
This happens regardless of how many rows have been read: sometimes it's 80,000, other times 25,000, but it's always after roughly 10 minutes. I've debugged my client code and can see that resultSet.isClosed() is false, and that the failure always happens on the 99th row out of 100. I can see that there is data in the 100th row, but the client still throws that exception on the 99th. I've tried setting the fetch size and the query timeout on both the ResultSet and the PreparedStatement, to no avail.
I've worked around this with keyset pagination, issuing a new query each time the client hits this error (sketched below), but when I have to read several tables with millions of rows and re-query every ~80,000 rows, the overall read takes quite a long time. Is there any reason I should be seeing a prematurely closed ResultSet?
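The tuning I've already tried looks roughly like this (conn is the thin-client Connection from the sketch above; the values are just examples, and neither change moved the ~10-minute cutoff):

// Attempted tuning; no effect on the ~10-minute cutoff
PreparedStatement ps = conn.prepareStatement("SELECT * FROM MY_TABLE");
ps.setFetchSize(2000);       // larger fetch batches
ps.setQueryTimeout(0);       // no statement-level timeout
ResultSet rs = ps.executeQuery();
rs.setFetchSize(2000);       // also tried on the ResultSet itself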
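For reference, the keyset-pagination workaround looks roughly like this (simplified; ID stands in for the table's primary-key column, and the URL and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PaginatedRead {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
        String sql = "SELECT ID, COL1 FROM MY_TABLE WHERE ID > ? ORDER BY ID LIMIT 80000";
        try (Connection conn = DriverManager.getConnection(url)) {
            long lastId = 0;
            boolean more = true;
            while (more) {                           // one query per ~80,000-row page
                more = false;
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, lastId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("ID");   // remember the key to restart from
                            more = true;
                            // process the current row ...
                        }
                    }
                }
            }
        }
    }
}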
Labels:
- Apache HBase
- Apache Phoenix