Support Questions

Caused by: org.apache.thrift.transport.TTransportException: Broken pipe

I sometimes get the error below while running an Impala query through a JDBC connection using the Hive JDBC jars:


java.sql.SQLException: Error while cleaning up the server resources
                at org.apache.hive.jdbc.HiveConnection.close(
                at gxs.core.hadoop$with_impala_connection_STAR_.invoke(Unknown Source)
Caused by: org.apache.thrift.transport.TTransportException: Broken pipe
                at org.apache.thrift.transport.TIOStreamTransport.flush(
                at org.apache.thrift.TServiceClient.sendBase(
                at org.apache.hive.service.cli.thrift.TCLIService$Client.send_CloseSession(
                at org.apache.hive.service.cli.thrift.TCLIService$Client.CloseSession(
                at org.apache.hive.jdbc.HiveConnection.close(
                ... 25 more
Caused by: Broken pipe
                at Method)
                at org.apache.thrift.transport.TIOStreamTransport.flush(
                ... 29 more


How can I resolve this?


Cloudera Employee

Is the impalad that was the destination of the connection still up/healthy?

Yes, but I still get this problem many times.

Expert Contributor

Is there a load balancer in between the client and impalad? If yes, try increasing the connection timeout on the LB.

Yes, I am using HAProxy, where I have configured all my impalad nodes; this URL is what the JDBC connection uses.

Expert Contributor

Can you paste the haproxy.cfg here? I just want to have a look at the connection timeout configured.


The error you see happens if the connection between the client and impalad gets broken. Having a lower connection timeout on HAProxy can potentially cause this.
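One common client-side mitigation (not stated as an official fix in this thread) is to treat a connection that fails with a broken pipe as dead and retry the whole operation on a fresh connection from the pool. A minimal sketch, assuming the caller opens a new connection inside the action on each attempt; `withRetry` and `maxAttempts` are hypothetical names, not part of any Hive/Impala API:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class RetryingQuery {
    // Retry a JDBC action a few times when the connection was dropped
    // (e.g. the load balancer closed an idle connection, surfacing as
    // "Broken pipe"). The action should obtain a fresh connection each
    // time it runs, so a stale one is never reused.
    public static <T> T withRetry(Callable<T> action, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (SQLException e) {
                last = e; // connection likely stale; loop and try again
            }
        }
        throw last; // all attempts exhausted
    }
}
```

In practice the `Callable` would wrap both `DriverManager.getConnection(...)` (or a pool checkout) and the query execution, so each retry starts from a brand-new socket.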

Just a correction: we are using HAProxy in the UAT environment, but for PROD we are using a VIP created by the infrastructure team. Here is the UAT config:


bash-4.1$ vi haproxy-cdh.cfg
user impala
group impala

# turn on stats unix socket
#stats socket /var/lib/haproxy/stats

# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
# You might need to adjust timing values to prevent timeouts.
# mode http
# option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
maxconn 1000
timeout connect 300000
timeout client 300000
timeout server 300000

# This sets up the admin page for HA Proxy at port 25002.
listen stats :25002
mode http
stats enable
stats auth username:password

# This is the setup for Impala. Impala client connect to load_balancer_host:25003.
# HAProxy will balance connections among the list of servers listed below.
# The list of Impalad is listening at port 21000 for beeswax (impala-shell) or original ODBC driver.
# For JDBC or ODBC version 2.x driver, use port 21050 instead of 21000.
#listen impala :25053
listen impala :25003
timeout client 3600000
timeout server 3600000
balance leastconn
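The pasted `listen impala` section appears to be cut off before the backend server list. For reference, a typical backend list for the JDBC port (hostnames and node count below are placeholders, not taken from the thread) would look like:

```
    server impalad1 host1.example.com:21050 check
    server impalad2 host2.example.com:21050 check
    server impalad3 host3.example.com:21050 check
```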


Expert Contributor

Did the issue reported in this thread happen in UAT or in PROD?


From the UAT config, the default timeouts are around 5 minutes, and the overridden timeouts in the impala section are around 1 hour.


Does the error you posted happen for a long-running JDBC application maintaining a single connection? Can you check whether the issue still happens if you increase the timeouts further, let's say to 2 hours?

The reported issue is in PROD, though we have gotten the error in UAT too, but very rarely. We have 15 JDBC connections in the connection pool.

New Contributor
I am having the same issue, using a c3p0 pool. If I retry, it works. Is there any sample code for connection pooling?
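No official sample was posted in this thread, but a minimal c3p0 pool setup that validates connections before handing them out (so a connection killed by the load balancer is quietly replaced instead of failing with a broken pipe) could look like the sketch below. The JDBC URL, host, and the pool/timeout numbers are assumptions for illustration; only the driver class and the pool size of 15 come from this thread.

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ImpalaPool {
    public static ComboPooledDataSource newPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("org.apache.hive.jdbc.HiveDriver");   // Hive JDBC driver, as in the thread
        ds.setJdbcUrl("jdbc:hive2://lb-host:25003/default");    // hypothetical LB host/port
        ds.setMinPoolSize(5);
        ds.setMaxPoolSize(15);                // thread mentions a pool of 15 connections
        ds.setTestConnectionOnCheckout(true); // validate each connection before use
        ds.setIdleConnectionTestPeriod(240);  // re-test idle connections every 4 min,
                                              // kept below the LB's idle timeout
        ds.setMaxIdleTime(600);               // retire connections idle longer than 10 min
        return ds;
    }
}
```

The key idea is to keep the pool's idle-test period shorter than the load balancer's idle timeout, so c3p0 discards dead connections before your application ever borrows one.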