We are using an endpoint coprocessor to fetch records from our HBase cluster.
We have a 3-node cluster with 180 regions in total.
Calls to the endpoint coprocessor are taking longer than usual. After some analysis, the property I suspect is hbase.regionserver.handler.count, which is 30 by default. My client code makes batch calls to the coprocessor,
and there are 10 such batch calls, possibly simultaneous, and each batch call creates 180 separate threads, so the total number of threads on the client side can sometimes reach 1800.
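One client-side mitigation, independent of the server setting, is to stop the thread count from multiplying: share one bounded pool across all batch calls instead of letting each call spawn 180 threads. The sketch below is illustrative only (the task body is a stand-in for a per-region coprocessor RPC; the class name, pool size of 64, and counts are assumptions, not part of the original code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: 10 simultaneous batch calls x 180 regions would mean
// 1800 client threads; a shared fixed-size pool caps live threads while still
// completing every per-region task.
public class BoundedBatchCalls {

    static int runBoundedCalls(int batchCalls, int regions, int poolSize)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();
        for (int call = 0; call < batchCalls; call++) {
            for (int region = 0; region < regions; region++) {
                // Stand-in for one per-region coprocessor RPC.
                pool.submit(() -> { completed.incrementAndGet(); });
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 10 batch calls x 180 regions, but never more than 64 live threads.
        System.out.println("tasks completed: " + runBoundedCalls(10, 180, 64));
    }
}
```

With a cap like this, client concurrency no longer scales with (batch calls × regions), which also puts less simultaneous pressure on the region server handlers.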
I have changed hbase.regionserver.handler.count from 30 to 100 but still do not see much performance improvement.
Now, my questions:
What is a reasonable value for the property hbase.regionserver.handler.count?
How can I tell whether this property is affecting performance?
If I increase this property, what other settings should I adjust for proper functioning?
Thanks in advance.
In theory it would be one per client; I'm not sure whether you should try 180 or 1800. Both are pretty high, though.
You should set it to the number of CPU cores available on the region servers. Depending on the data size, perhaps you need more nodes in the cluster. How big is the data? What version of HBase? What version of Hadoop? What JDK version? How much RAM on the nodes? How big is each region?
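As a sketch, sizing the handler count to the core count would look like this in hbase-site.xml on each region server, followed by a region server restart (the value 48 is purely illustrative; substitute your actual per-node core count):

```xml
<!-- hbase-site.xml: illustrative value, match to your CPU core count -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>48</value>
</property>
```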
Did you restart the region servers after changing the parameter?
Are you looking at the HBase Master UI, JMX, logs, stack traces, and other diagnostics provided by HBase? Ambari and any other monitoring tools you have may also help.
Anything in the logs?
Do you need the endpoint coprocessor at all?
Can you scan the data directly? Read it with Spark? Read it with NiFi? Or read it through Phoenix as a SQL query?