
Phoenix errors on Hbase slow down not closing properly

New Contributor

I have a highly available, real-time Java application that connects to HBase through the Phoenix JDBC client. When HBase slows down (I see spikes in read and write latency in Ambari), my read and write queries take much longer and often fail. I can live with the failures, but my real concern is that whenever queries fail, Linux shows an increase in the number of open files for the Java process. We close the JDBC connection properly in finally blocks, so I am not sure whether code somewhere in the Phoenix jar is leaving connections open. The application connects to HBase/Phoenix through a Tomcat connection pool.

I am using the phoenix-4.7.0.2.5.0.0-1245-client jar. I'd appreciate any help anyone has on this.
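For context, the pool is set up roughly like this (an illustrative sketch, not our exact production settings; the URL, pool sizes, and timeouts here are placeholders):

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    public class PhoenixPool {
        public static DataSource createDataSource() {
            PoolProperties p = new PoolProperties();
            // Placeholder ZooKeeper quorum; ours points at the real cluster.
            p.setUrl("jdbc:phoenix:zk1,zk2,zk3:2181:/hbase");
            p.setDriverClassName("org.apache.phoenix.jdbc.PhoenixDriver");
            p.setMaxActive(20);
            // Reclaim connections that are never returned to the pool;
            // leaked connections would otherwise hold their sockets open.
            p.setRemoveAbandoned(true);
            p.setRemoveAbandonedTimeout(60);
            p.setLogAbandoned(true);
            DataSource ds = new DataSource();
            ds.setPoolProperties(p);
            return ds;
        }
    }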


Re: Phoenix errors on Hbase slow down not closing properly

Have you inspected the metrics exposed for HBase via Grafana and the Ambari Metrics System?

Also, have you looked at what the open files actually are? This is not necessarily indicative of a problem as files (and sockets) will be opened as your application uses HBase.
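On Linux you can inspect this from inside the JVM by reading /proc/self/fd, which shows whether the descriptors are regular files, sockets, or pipes. A rough sketch (Linux-specific):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class OpenFileLister {
        // Prints each open descriptor and the target it points to.
        public static void listOpenFds() throws IOException {
            try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
                fds.forEach(fd -> {
                    try {
                        System.out.println(fd + " -> " + Files.readSymbolicLink(fd));
                    } catch (IOException e) {
                        // The descriptor may have been closed between listing and reading.
                    }
                });
            }
        }
    }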

Re: Phoenix errors on Hbase slow down not closing properly

New Contributor

On the application side, we see the application throwing "Too many open files" errors. The OS hard limit for the application is 4096 open files, and once we reach it the errors start. I could raise the limit, but that would not address the root cause. What is happening is that when HBase slows down, the number of open files increases drastically, and I am concerned about why: the data load and user load are not increasing, and neither is the application server's memory utilization. We only see an increase in TCP connections to HBase, which I believe count as open files in Linux.
Also, can you help me with what to look for in Grafana and the Ambari Metrics System?

Re: Phoenix errors on Hbase slow down not closing properly

4096 open files is not nearly enough for how HDFS and HBase operate. I believe Ambari-based installations set a maximum of 65k open files by default.
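If you want to track descriptor usage from inside the application itself, the JDK exposes the counts on Unix-like systems (a small sketch; it assumes the com.sun.management extension is available on your JVM):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdMonitor {
        // Logs the JVM's current and maximum file-descriptor counts.
        public static void logFdUsage() {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
                System.out.printf("open fds: %d / max: %d%n",
                        unixOs.getOpenFileDescriptorCount(),
                        unixOs.getMaxFileDescriptorCount());
            }
        }
    }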

Re: Phoenix errors on Hbase slow down not closing properly

New Contributor

Hi Josh,
I am sorry that my answer was ambiguous. The files are being opened by the application server, not by HDFS/HBase; they are opened by the JDBC connections between the application and Phoenix/HBase. The 4096 limit is at the application level, not the HBase level.

Re: Phoenix errors on Hbase slow down not closing properly

Super Collaborator

bq. increase in the number of open files for the Java process.

Which process are you referring to?

I assume you were talking about your application.


Re: Phoenix errors on Hbase slow down not closing properly

Super Collaborator

bq. close the JDBC connection properly in finally blocks

Mind sharing the relevant code?

Thanks

Re: Phoenix errors on Hbase slow down not closing properly

New Contributor

@Josh Elser So you mean we need to configure our application's OS limit to allow 65k open files? Is that the recommended value?

@Ted, the files are in the Java process. I checked that the finally block has the necessary lines to close the connection. The code is pretty straightforward. Do let me know if anything looks fishy.

    public static void closeHbaseConnections(Connection con, PreparedStatement ps, ResultSet rs) {
        // Close each resource independently so a failure closing one
        // does not prevent the others from being closed.
        try { if (rs != null) rs.close(); } catch (Exception e) { log.error(e.getMessage()); }
        try { if (ps != null) ps.close(); } catch (Exception e) { log.error(e.getMessage()); }
        try { if (con != null) con.close(); } catch (Exception e) { log.error(e.getMessage()); }
    }

Re: Phoenix errors on Hbase slow down not closing properly

I can't tell you how many open files your application will need, but HBase/HDFS should both have 65k open files configured. I would guess that your application should have more than 4k open files allowed.
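One more thought: rather than closing everything in a helper method, try-with-resources guarantees the ResultSet, PreparedStatement, and Connection are closed in reverse order even when the query throws, so a slow or failing HBase call cannot leak handles. A sketch (the URL and query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PhoenixQuery {
        public static void run() throws SQLException {
            // Placeholder JDBC URL; point this at your ZooKeeper quorum.
            String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase";
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement("SELECT id FROM my_table LIMIT 10");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
            // All three resources are closed here, even if executeQuery() or next() failed.
        }
    }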