Explorer
Posts: 19
Registered: ‎01-13-2017

How to manage Impala connection exhaustion

Hello team,
We have various applications/scripts that fetch data via Impala over JDBC connections. The queries end up failing with exceptions like:

Caused by: com.cloudera.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: null, SQL state: HY000,

The issue gets resolved after a service restart of Impala.

Is there a way to check how many connections are open? Please let us know.

Thanks and regards
Sayak

Master
Posts: 430
Registered: ‎07-01-2015

Re: How to manage Impala connection exhaustion

You can try monitoring the number of queries in the Impala Daemon's web UI.
Explorer
Posts: 19
Registered: ‎01-13-2017

Re: How to manage Impala connection exhaustion

Thank you. Is there any other way apart from the web UI, like from a command-line interface?
Master
Posts: 430
Registered: ‎07-01-2015

Re: How to manage Impala connection exhaustion

No, but you can easily write a script to pull that data from the Impala Daemon's REST API.
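
For example, something along these lines in Python. This is only a rough sketch: it assumes the daemon's debug web UI on the default port 25000, which returns JSON when you append ?json to a page; the host name and the response keys are placeholders you may need to adjust for your Impala version.

import json
import urllib.request

# Hypothetical daemon host; 25000 is the default impalad debug web UI port.
IMPALAD = "http://impalad-host:25000"

def fetch_json(page):
    # Most impalad debug pages return their data as JSON when "?json" is appended.
    with urllib.request.urlopen(IMPALAD + "/" + page + "?json") as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_json("sessions")
    # The "sessions" key is an assumption -- dump the raw response once
    # to confirm the layout on your version before relying on it.
    print("open sessions:", len(data.get("sessions", [])))

You can run the same kind of fetch against the /queries page to count running queries.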
Champion
Posts: 776
Registered: ‎05-16-2016

Re: How to manage Impala connection exhaustion

If you are looking for the number of sessions against HS2 in Impala, you can use the Chart Library, which you can find under CM -> Impala -> Chart Library.

Cloudera Employee
Posts: 761
Registered: ‎03-23-2015

Re: How to manage Impala connection exhaustion

You might also create triggers in CM by going to CM > Impala > Charts (on the right-hand side) > Queries Across Impala Daemons

 

[Screenshot: Screen Shot 2018-12-24 at 10.31.53 am.png]

Then follow the on-screen instructions.

More details here:

https://www.cloudera.com/documentation/enterprise/latest/topics/cm_dg_triggers.html
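
As a rough example, a trigger expression could look something like the line below. The metric name matches the tsquery mentioned in the next reply, the threshold 60 is an arbitrary placeholder, and whether you filter on roleName or entityName depends on where you attach the trigger, so double-check the syntax against the documentation above:

IF (SELECT thrift_server_hiveserver2_frontend_connections_in_use WHERE roleName=$ROLENAME AND last(thrift_server_hiveserver2_frontend_connections_in_use) > 60) DO health:concerning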

Contributor
Posts: 34
Registered: ‎03-07-2017

Re: How to manage Impala connection exhaustion

You should look at fe_service_threads: check what your current setting is and consider increasing it if needed. When you are seeing connection exceptions/timeouts, have a look at a dashboard using the following tsquery:

'select thrift_server_hiveserver2_frontend_connections_in_use'. If this value is approaching your fe_service_threads setting, then you know you're running out of connections. The setting in effect can be viewed on the service's /varz page.
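
If you want to script this rather than watch a dashboard, the same tsquery can be run through the Cloudera Manager REST API. A minimal Python sketch follows, where the CM host, port, credentials and API version (v19) are placeholders for your deployment:

import base64
import json
import urllib.parse
import urllib.request

CM = "http://cm-host:7180"   # placeholder Cloudera Manager host/port
QUERY = "select thrift_server_hiveserver2_frontend_connections_in_use"

url = CM + "/api/v19/timeseries?query=" + urllib.parse.quote(QUERY)
req = urllib.request.Request(url)
# Placeholder credentials; use a read-only CM account in practice.
token = base64.b64encode(b"admin:admin").decode()
req.add_header("Authorization", "Basic " + token)

with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Print the most recent data point for each returned stream
# (one stream per Impala daemon); the items -> timeSeries -> data
# layout follows the CM timeseries API response format.
for item in data.get("items", []):
    for series in item.get("timeSeries", []):
        points = series.get("data", [])
        if points:
            print(series["metadata"]["entityName"], points[-1]["value"])

Comparing the printed values against your fe_service_threads setting shows how close each daemon is to running out of connections.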