
How to manage Impala connection exhaustion


Explorer

Hello team,
We have several applications/scripts that fetch data from Impala over JDBC connections.

The queries end up failing with exceptions such as:

Caused by: com.cloudera.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: null, SQL state: HY000,

The issue is resolved only after restarting the Impala service.

Is there a way to check how many connections are open? Please let us know.

Thanks and regards
Sayak

6 REPLIES

Re: How to manage Impala connection exhaustion

Master Collaborator
You can try monitoring the number of queries in the Impala Daemon's web UI.

Re: How to manage Impala connection exhaustion

Explorer
Thank you. Is there any other way apart from the web UI, such as from a command-line interface?

Re: How to manage Impala connection exhaustion

Master Collaborator
No, but you can easily write a script to pull that data from the Impala Daemon's REST API.
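For example, a rough sketch along these lines. This assumes the daemon's debug web UI is on the default port 25000 and that its pages accept a "?json" suffix; the hostname and the response keys are placeholders to adjust for your version:

    #!/usr/bin/env python3
    # Rough sketch: poll an Impala daemon's debug web server for session info.
    # Assumptions (not confirmed in this thread): default debug web UI port 25000,
    # pages accept a "?json" suffix, and the hostname below is a placeholder.
    import json
    import urllib.request

    IMPALAD = "http://impalad-host.example.com:25000"  # hypothetical host

    def fetch(page):
        with urllib.request.urlopen(IMPALAD + page + "?json", timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8"))

    sessions = fetch("/sessions")
    # Key names vary across releases, so dump what is actually returned.
    print("Top-level keys on /sessions:", sorted(sessions.keys()))
    print("Open sessions (if reported):", sessions.get("num_sessions", "unknown"))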

Re: How to manage Impala connection exhaustion

Guru

You might also create triggers in CM by going to CM > Impala > Charts (on the right-hand side) > Queries Across Impala Daemons.

(Screenshot: Queries Across Impala Daemons chart)

Then follow the on-screen instructions.

More details here:

https://www.cloudera.com/documentation/enterprise/latest/topics/cm_dg_triggers.html
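As a rough illustration only, a trigger expression on the HS2 frontend connection metric mentioned later in this thread might look like the following; verify the exact syntax against the triggers documentation linked above, and note that the threshold of 60 and the health action are just placeholders:

    IF (SELECT thrift_server_hiveserver2_frontend_connections_in_use
        WHERE entityName = $SERVICENAME
          AND last(thrift_server_hiveserver2_frontend_connections_in_use) > 60)
    DO health:concerning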

Re: How to manage Impala connection exhaustion

Champion

If you are looking for the number of sessions against HS2 in Impala, you can use the Chart Library, found under CM -> Impala -> Chart Library ->
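For example, the tsquery below (using the metric named in the next reply) can be pasted into the chart builder there to plot in-use HS2 frontend connections; the exact metric name may differ between CM versions:

    select thrift_server_hiveserver2_frontend_connections_in_use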

Re: How to manage Impala connection exhaustion

Contributor

You should look at fe_service_threads: check your current setting and consider increasing it if needed. When you are seeing connection exceptions/timeouts, have a look at a dashboard using the following tsquery:

'select thrift_server_hiveserver2_frontend_connections_in_use'

If this value is approaching the fe_service_threads setting, then you know you are running out of connections. The current setting can be viewed on the daemon's /varz page.
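If you prefer to check the setting from a script rather than the UI, a minimal sketch along these lines works, assuming the default debug web UI port 25000 (the hostname below is a placeholder):

    # Minimal sketch: print the fe_service_threads setting from an impalad's /varz page.
    # Assumes the default debug web UI port 25000; the hostname is a placeholder.
    import urllib.request

    url = "http://impalad-host.example.com:25000/varz"
    page = urllib.request.urlopen(url, timeout=10).read().decode("utf-8")
    for line in page.splitlines():
        if "fe_service_threads" in line:
            print(line.strip())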
