Do you increase ZooKeeper max session timeout to handle region server GC pauses?
Labels: Apache HBase
Created 04-13-2016 06:59 AM
The default ZooKeeper maxSessionTimeout is 20 times the tickTime value. We often notice GC leading to long pauses on the HBase region server end, causing it to lose its ephemeral znode on ZooKeeper and hence be marked dead by the master. One way to address this could be to increase ZooKeeper's maxSessionTimeout to something like 90 seconds, because some of our GC pauses are around that mark. The downside is that client applications are then allowed to negotiate a higher session timeout value, and it takes longer for the master to mark a dead server down.
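Since the negotiation came up: here is a minimal sketch (not HBase code) of how a ZooKeeper server clamps a client's requested session timeout. The defaults mirror ZooKeeper's documented minSessionTimeout = 2 × tickTime and maxSessionTimeout = 20 × tickTime.

```python
TICK_TIME_MS = 2000  # a common default tickTime

def negotiated_session_timeout(requested_ms,
                               min_ms=2 * TICK_TIME_MS,
                               max_ms=20 * TICK_TIME_MS):
    """Clamp the client's requested session timeout into [min_ms, max_ms]."""
    return max(min_ms, min(requested_ms, max_ms))

# With defaults (tickTime = 2000 ms), maxSessionTimeout is 40 s, so a
# client asking for 90 s is clamped down to 40 s:
print(negotiated_session_timeout(90_000))                  # 40000
# Raising maxSessionTimeout to 90 s lets the 90 s request through:
print(negotiated_session_timeout(90_000, max_ms=90_000))   # 90000
```

This is why increasing maxSessionTimeout on the server side affects every client: any application can now request, and be granted, the larger value.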
However, do you usually increase this value to address GC pause issues?
Created 04-13-2016 08:18 AM
Hello Sumit,
Increasing the ZooKeeper session timeout is often a quick first fix to GC pauses "killing" region servers in HBase. In the longer run, if you have GC pauses, it is because your process is struggling to find memory.
There can be architectural approaches to this problem. For example, does this happen during heavy write loads? In that case, consider doing bulk loads when possible.
You can also look at your HBase configuration: what is your overall allocated memory for HBase, and how is it distributed between writes and reads? Do you flush your memstore often, and does this lead to many compactions?
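For reference, that read/write heap split is controlled by a few hbase-site.xml properties; the values below are illustrative (roughly the defaults of that era), not recommendations:

```xml
<!-- Illustrative hbase-site.xml fragment; values are examples only. -->
<!-- Share of heap reserved for all memstores (write path). -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>
<!-- Share of heap reserved for the block cache (read path). -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
<!-- Per-region memstore size that triggers a flush. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- 128 MB -->
</property>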
Lastly, you can look at GC tuning. I won't dive into this one, but Lars has done a nice introductory blog post on it here: http://hadoop-hbase.blogspot.ie/2014/03/hbase-gc-tuning-observations.html
Hope any of this helps
Created 04-13-2016 09:00 AM
So, one of the things we tried was to increase eden space. Ideally, the block cache would stay in the tenured generation while memstore data mostly does not get promoted, since memstore flushes would push that data out of the heap anyway. Increasing eden seems a good choice because it reduced a lot of our GC pauses. We also tried the G1 collector, but despite hearing so many good things about it, we could not tune it enough to help us with HBase. In our case, writes happen both in bursts and at a roughly constant rate, and reads usually span many regions because we salt our row keys. I could not understand your point about compactions, though: would more compactions lead to longer pauses?
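To make the eden-sizing idea concrete, a sketch of what this might look like in hbase-env.sh (the flag values are examples, not recommendations; this assumes the CMS collector typical of HBase deployments at the time):

```shell
# Illustrative hbase-env.sh fragment (example values only): a larger young
# generation (-Xmn) so short-lived memstore data dies in eden, with CMS
# collecting the tenured generation that holds the block cache.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms16g -Xmx16g \
  -Xmn2g \
  -XX:+UseParNewGC \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly"
```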
Created 04-13-2016 06:18 PM
Yes, compaction invalidates the block cache and hence results in more GC.
Created 04-14-2016 04:10 PM
OK, I was not aware that major compaction would invalidate the block cache. Not sure why that should be so, though. Any link where I can read more on this?
Created 05-02-2016 02:49 PM
OK, I figured out there are settings which control whether block cache entries are evicted when a major compaction happens. In my case that setting is disabled, however.
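If the setting meant here is hbase.rs.evictblocksonclose (an assumption on my part; the post does not name it), it controls whether a store file's cached blocks are evicted when the file is closed, e.g. after being rewritten by a compaction, and it defaults to false:

```xml
<!-- Assumed setting: evict cached blocks when a store file is closed
     (e.g. replaced by a compaction). Default is false. -->
<property>
  <name>hbase.rs.evictblocksonclose</name>
  <value>false</value>
</property>
```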
Created 04-13-2016 08:42 AM
If you don't have a latency-sensitive application, then you should go ahead and increase it.
But I would suggest tuning your GC and doing capacity planning according to load. If some nodes are experiencing more GC than others, you might need to check for hot spots; and if your query scans a full table, you might try skipping the block cache for that scan.
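On that last point, a sketch of skipping the block cache for a scan ('mytable' is a made-up name): in the HBase shell the scan attribute is CACHE_BLOCKS, and the Java client has the equivalent Scan#setCacheBlocks(false).

```
hbase> scan 'mytable', {CACHE_BLOCKS => false}
```

This keeps a one-off full scan from evicting the hot working set out of the cache.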
Created 04-13-2016 08:55 AM
Overall, with the increased ZooKeeper timeout we did not lose a single region server instance to session timeouts. Plus, we also use Phoenix on top of HBase and avoid full table scans with some interesting options such as skip scans.
Created 04-14-2016 12:57 PM
GC pauses are mostly expected when there is frequent eviction from the block cache (unrepeated or large scans loading new blocks into the cache every time, frequent compactions, or scans on a table that is not well distributed across region servers, making one region server's cache hot, etc.).
PS: Phoenix does not necessarily use a skip-scan filter for every query. For example, it will not use one if you don't have the leading columns of the row key in your WHERE clause, as it will not know which key to seek to.
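To illustrate that point with a hypothetical Phoenix table (the schema and names here are made up), assume a composite row key of (tenant_id, event_time):

```sql
-- Filtering on the leading row-key column lets Phoenix use a skip scan:
SELECT * FROM events WHERE tenant_id IN ('t1', 't2') AND event_time > 1000;

-- Filtering only on a non-leading column cannot seek, so it falls back
-- to a full scan:
SELECT * FROM events WHERE event_time > 1000;
```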
Created 04-14-2016 04:09 PM
Yes, agreed on your point about skip scans. We always use leading columns in the WHERE clause.
