ICA
New Contributor
Posts: 1
Registered: ‎08-23-2017

Connection Pooling Using HBase for a Mobile Application

Can you suggest an approach for the scenario below?

The customer's requirement was to support up to 40,000 concurrent connections to HBase with a response time of 3 seconds or less. Users would access the database through a mobile application and expect reasonable response times for both reading and writing data (no retention period was specified by the customer, so we planned for growing data volumes). HBase was chosen as the database for the proof of concept on the assumption that there would be a considerable amount of data and that real-time processing would also be required.

Test Cases

We checked the feasibility of establishing the concurrent connections to HBase in several ways; the details are given below. The distribution used was Cloudera, and the environment was hosted in the Azure cloud.

  1. Initially we used the standard HBase architecture, in which each user opens a new connection every time data is accessed or modified. With the ZooKeeper session timeout set low, we could establish up to 198 connections, after which all further connections timed out. We concluded that opening a fresh connection per access cannot scale to a large number of users, even with Max Client Connections set to zero to allow unlimited connections: the achievable number of connections ultimately depends on the server's CPU cores and RAM. Given the configuration limits of the local machine as well as the development cloud lab, we moved to a second approach.
  2. The second approach was to place an HBase REST server on top of HBase and keep a long-running client connection to HBase, so that users do not need to open a connection each time. The ideal setup would put a load balancer in front of multiple HBase REST servers (say, three) so that the load on the REST servers is shared. This architecture was tested with two different table models:
     a. Keep each transaction's fields as columns, with a fixed number of columns per row, so that the same user's data is spread across different rows; fetching data then requires column filtering. We tested this by creating a scanner (ScanID) for 100 rows in the REST server, but the number of connections we could establish for reading this data simultaneously was only around 1,500 with a reasonable response time of 4 to 7 seconds. When we increased the number of connections, more of them failed with timeout errors, the server hung, and response times climbed further. We therefore decided to change to the table model described in step b.

 

     b. Keep each user's transactions in a single row, with each transaction as a single column, so that filtering operations on columns are avoided: read the user's row in full and parse the data according to the requirement. After tuning parameters such as heap memory, scanner caching, ZooKeeper timeout, and region handler count, we could establish up to 8,000 connections with little data in the table (very few columns and rows). However, problems started once we loaded around 3 TB of data (about 10,000 columns and 1 million rows): response times rose as the number of connections increased, and we could not identify the bottleneck in the cloud/REST server well enough to determine a scalability factor.
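Table model b can be illustrated with plain data structures. Below is a pure-Python sketch, not actual HBase client code: `txn_row` and `parse_transactions` are illustrative names, and the row is assumed to be keyed by user ID with transaction IDs as column qualifiers. The point is that the whole row is fetched in one read and parsed client-side, so no server-side column filtering is needed.

```python
import json

# Model b: one row per user; each transaction is one column.
# A fetched row arrives as {column_qualifier: cell_value}; the client
# reads the whole row and parses it, avoiding column filters.

txn_row = {
    "txn:1001": json.dumps({"amount": 250, "type": "debit"}),
    "txn:1002": json.dumps({"amount": 900, "type": "credit"}),
    "txn:1003": json.dumps({"amount": 120, "type": "debit"}),
}

def parse_transactions(row):
    """Turn a user's full row into a sorted list of (txn_id, payload)."""
    txns = []
    for qualifier, value in row.items():
        family, txn_id = qualifier.split(":", 1)  # e.g. "txn", "1001"
        txns.append((txn_id, json.loads(value)))
    return sorted(txns)  # sort by transaction id

# e.g. parse_transactions(txn_row)[0] -> ("1001", {"amount": 250, "type": "debit"})
```

The trade-off, as described above, is that very wide rows (thousands of columns) must be read in full even when only a few transactions are needed, which is one plausible contributor to the rising response times at 3 TB.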

 

 
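The difference between approaches 1 and 2 can be sketched as follows. This is a minimal pure-Python illustration, not real HBase client code: `HBaseClient`, `handle_request_naive`, and `handle_request_pooled` are hypothetical names standing in for the real client and the REST server's request handlers. The point is that one long-lived connection is created once and shared, instead of being opened per request.

```python
# Illustrative sketch of the two connection strategies tested above.
# HBaseClient is a hypothetical stand-in for a real HBase client; it
# only counts how many connections are opened.

class HBaseClient:
    opened = 0  # total connections opened, across all instances

    def __init__(self):
        HBaseClient.opened += 1  # opening a connection is expensive

    def get(self, row_key):
        return {"row": row_key}  # pretend read


# Approach 1: every request opens its own connection (did not scale
# past ~198 connections in the tests above).
def handle_request_naive(row_key):
    client = HBaseClient()  # new connection per request
    return client.get(row_key)


# Approach 2: one long-lived connection shared by all requests, as an
# HBase REST server keeps on behalf of its users.
SHARED_CLIENT = HBaseClient()

def handle_request_pooled(row_key):
    return SHARED_CLIENT.get(row_key)  # reuse the existing connection
```

With the naive handler, serving 1,000 requests opens 1,000 connections; with the pooled handler, the same 1,000 requests open none beyond the one created at startup.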

The parameters we set in Cloudera Manager are as follows:

ZooKeeper Session Timeout (zookeeper.session.timeout): 60000 ms

HBase Client Scanner Caching (hbase.client.scanner.caching): 100

HBase Master Handler Count (hbase.master.handler.count): 25

HBase RegionServer Handler Count (hbase.regionserver.handler.count): 10000

Java Heap Size of HBase REST Server: 1 GiB

Java Heap Size of HBase RegionServer: 16 GiB

Java Heap Size of ZooKeeper Server: 1 GiB

Maximum Client Connections (maxClientCnxns): 0 (unlimited)

Maximum Session Timeout (maxSessionTimeout): 60000 ms
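For reference, a sketch of how the non-heap HBase settings above would look as hbase-site.xml properties when configured outside Cloudera Manager. This is an assumption about the mapping, not a copy of our config: maxClientCnxns and maxSessionTimeout are ZooKeeper zoo.cfg settings, and the heap sizes are JVM options, so they do not belong in this file.

```xml
<!-- Sketch: hbase-site.xml equivalents of the Cloudera Manager
     settings listed above. Heap sizes and the ZooKeeper server
     settings (maxClientCnxns, maxSessionTimeout) are configured
     elsewhere (JVM options and zoo.cfg respectively). -->
<configuration>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000</value> <!-- milliseconds -->
  </property>
  <property>
    <name>hbase.client.scanner.caching</name>
    <value>100</value> <!-- rows fetched per scanner RPC -->
  </property>
  <property>
    <name>hbase.master.handler.count</name>
    <value>25</value>
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>10000</value> <!-- RPC handler threads per RegionServer -->
  </property>
</configuration>
```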
