
Getting connection reset exception when the hive table (JDBC resultset) contains huge data

Explorer

Dear All,

I am using the Hive JDBC client (hive-jdbc-1.1.0-cdh5.10.0.jar; the Hive version is 1.1.0-cdh5.13.0). I ran a SELECT * query on a table containing 1.5 crore (15 million) records. While iterating over the ResultSet, I get the error below after a few million records. How can I fix this issue?

 

java.sql.SQLException: Error retrieving next row
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:387)
	at com.solix.bigdata.commons.io.SolrHandler.indexTableContent(SolrHandler.java:1161)
	at com.solix.bigdata.commons.io.DataIndexingThread.run(DataIndexingThread.java:1090)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
	at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
	at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
	at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.hadoop.hive.thrift.TFilterTransport.readAll(TFilterTransport.java:62)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
	at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:501)
	at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:488)
	at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1286)
	at com.sun.proxy.$Proxy36.FetchResults(Unknown Source)
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:363)
	... 2 more
Caused by: java.net.SocketException: Connection reset
	at java.net.SocketInputStream.read(SocketInputStream.java:209)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
	at sun.security.ssl.InputRecord.read(InputRecord.java:503)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
	at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
	at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
	... 21 more
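
A minimal sketch of this kind of full-table read over Hive JDBC (connection details, table name, and fetch size are placeholders, not the actual application code). Setting an explicit fetch size on the statement controls how many rows each Thrift FetchResults round trip pulls:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveFullScan {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver2-host:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // A larger fetch size means fewer FetchResults round trips;
            // the Hive JDBC default is fairly small (on the order of 1000 rows).
            stmt.setFetchSize(10000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                long rows = 0;
                while (rs.next()) {   // the "Error retrieving next row" surfaces here
                    // process the current row ...
                    rows++;
                }
                System.out.println("Rows read: " + rows);
            }
        }
    }
}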

 


Champion

What file format are you using?

Are statistics being collected for the table?

Do you use partitioning or bucketing on the table?

SELECT * on a table without a LIMIT clause hurts performance; it is a bad query pattern (see the chunked-scan sketch after these questions).

Do you really need to pull all the records?
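
A hedged sketch of reading the table in bounded chunks instead of one unbounded SELECT *. It assumes the table has a numeric key column (here called id, which is hypothetical; the real table may not have one) that can slice the scan into ranges, so each query and each ResultSet stays small:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveChunkedScan {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        final long chunk = 1_000_000L;  // rows per slice, illustrative
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver2-host:10000/default", "user", "")) {
            long min, max;
            // Find the key range once, then scan it in fixed-size slices.
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT MIN(id), MAX(id) FROM big_table")) {
                rs.next();
                min = rs.getLong(1);
                max = rs.getLong(2);
            }
            for (long lo = min; lo <= max; lo += chunk) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT * FROM big_table WHERE id >= ? AND id < ?")) {
                    ps.setLong(1, lo);
                    ps.setLong(2, lo + chunk);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // process the current row ...
                        }
                    }
                }
            }
        }
    }
}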

 

Explorer
File format: ORC
No partitioning or bucketing
I need to pull all the records

Champion

Did you try running the query in the Hive shell or Beeline?

Was your HiveServer2 up and running while the query was executing?

 

You may want to take a peek at the current values of these settings in your cluster (a sketch for checking them over JDBC follows the list):

 

hive.server2.session.check.interval 
hive.server2.idle.operation.timeout  
hive.server2.idle.session.timeout
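
One hedged way to peek at these values if you don't have direct access to the HiveServer2 configuration: executing SET <property> over a JDBC connection should return the current value as a single row (host and credentials below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowHs2Timeouts {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String[] props = {
                "hive.server2.session.check.interval",
                "hive.server2.idle.operation.timeout",
                "hive.server2.idle.session.timeout"
        };
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver2-host:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            for (String p : props) {
                // Each SET <property> query yields one row of the form "property=value".
                try (ResultSet rs = stmt.executeQuery("SET " + p)) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}

If the idle operation or session timeouts are shorter than the time the client spends iterating, HiveServer2 can close the operation mid-fetch, which would show up on the client as exactly this kind of connection reset.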