12-05-2016 10:49 AM
Glad to help! On the max compaction size config (hbase.hstore.compaction.max.size): it looks like, instead of the default, you are setting it to 512MB. Yes, that is certainly at least part of the issue. It effectively means compaction will ignore any storefile larger than 512MB. I'm unsure what that does to the region's ability to split when necessary; it's not something we set on our clusters. (See the config sketch below.)

Leaving this here for others: if you are relying on HBase to do the weekly major compaction (hbase.hregion.majorcompaction), there is a difference in behavior between an externally initiated compaction and an internal, system-initiated one. The system-initiated compaction (hbase.hregion.majorcompaction) seems to trigger only a minor compaction when a store has more files than a minor compaction will consider (hbase.hstore.compaction.max). I am guessing this is due to a desire to not impact the system with a very long-running major compaction.

In your case, that means HBase will keep triggering only a minor compaction of up to hbase.hstore.compaction.max storefiles every time it considers that region for compaction, which happens every (hbase.server.compactchecker.interval.multiplier multiplied by hbase.server.thread.wakefrequency) milliseconds; with the stock defaults (1000 x 10,000 ms) that works out to roughly every 2.8 hours. This is especially true if you generate more hfiles than hbase.hstore.compaction.max in the time it takes to complete one check interval plus the compaction itself, because then the store can never catch up.

An externally initiated compaction, either through the hbase shell or through the API, sets the compaction priority to high and does not consider hbase.hstore.compaction.max. (See the Java sketch below.)
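For reference, here is a minimal hbase-site.xml sketch of the settings discussed above. The 512MB value is the poster's reported setting; the other two values are the stock defaults as I understand them, shown only to make the discussion concrete:

<property>
  <name>hbase.hstore.compaction.max.size</name>
  <value>536870912</value> <!-- poster's 512MB cap; default is Long.MAX_VALUE, i.e. no cap -->
</property>
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value> <!-- system-initiated major compaction period: 7 days in ms -->
</property>
<property>
  <name>hbase.hstore.compaction.max</name>
  <value>10</value> <!-- max storefiles a (minor) compaction will consider -->
</property>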
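And a minimal Java sketch of triggering that high-priority, externally initiated major compaction through the client API. The table name "my_table" is just a placeholder; swap in your own:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MajorCompactExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Externally initiated major compaction: runs at high priority
            // and is not capped by hbase.hstore.compaction.max.
            admin.majorCompact(TableName.valueOf("my_table"));
        }
    }
}

The hbase shell equivalent is simply: major_compact 'my_table'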