
HBASE Too many WALs issue and hbase.regionserver.maxlogs setting

New Contributor

Good morning,


I have a problem with the maximum number of WALs.
The log file contains: org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL | regionserver/slave01-int:16020.logRoller | Too many WALs; count = 534, max = 102; forcing flush of 26 region(s)
This parameter is not declared anywhere in our HBase configuration.

Following this guide, specifically the "Flush Queue / Compaction Queue" section, which says to set:
hbase.regionserver.maxlogs (default value is 32; double or triple this number if you know you have a heavy write load),
we set hbase.regionserver.maxlogs = 500 and restarted HBase. The WAL files then moved to the oldWALs directory and disappeared within a few minutes.

We want to understand whether the WALs moved to oldWALs were processed correctly, and what the correct value for hbase.regionserver.maxlogs is (is the default calculated somehow?).
What does the value assigned to hbase.regionserver.maxlogs actually mean?
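Regarding "is it calculated somehow?": in recent HBase versions (since HBASE-14951, as I understand it) the default for hbase.regionserver.maxlogs is no longer a flat 32 but is derived from the regionserver heap size, the global memstore fraction, and the WAL roll size. A rough sketch of that calculation (function and variable names here are illustrative, not the actual HBase source):

```python
# Rough sketch of how HBase derives the default for
# hbase.regionserver.maxlogs (per HBASE-14951).
# Names are illustrative; this is not the HBase source code.

def default_max_logs(heap_bytes, memstore_fraction, log_roll_size_bytes):
    """Default maxlogs ~= heap * global memstore fraction * 2 / WAL roll size,
    floored at the old default of 32."""
    computed = int(heap_bytes * memstore_fraction * 2 / log_roll_size_bytes)
    return max(32, computed)

# A 16 GB heap with the default 0.4 memstore fraction and a 128 MB
# roll size works out to 102 -- which matches the "max = 102" in the
# log message above, suggesting the default was computed, not set.
heap = 16 * 1024 ** 3
print(default_max_logs(heap, 0.4, 128 * 1024 ** 2))  # 102
```

If that is the formula in play, explicitly setting hbase.regionserver.maxlogs in hbase-site.xml overrides the computed default.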




New Contributor


After adding the parameters that the Cloudera guidelines suggest:


  • hbase.regionserver.maxlogs=32
  • hbase.regionserver.logroll.multiplier=0.95
  • hbase.regionserver.hlog.blocksize=134217728
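For reference, these settings go into hbase-site.xml (or the equivalent safety valve in Cloudera Manager); the values below are simply the ones listed above:

```xml
<!-- hbase-site.xml: WAL rolling settings from the guidelines above -->
<property>
  <name>hbase.regionserver.maxlogs</name>
  <value>32</value>
</property>
<property>
  <name>hbase.regionserver.logroll.multiplier</name>
  <value>0.95</value>
</property>
<property>
  <!-- 134217728 bytes = 128 MB; WALs roll at multiplier * blocksize ~= 121 MB -->
  <name>hbase.regionserver.hlog.blocksize</name>
  <value>134217728</value>
</property>
```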

The new WAL files are now appearing in the WALs directory.

Is it OK that the WALs directory keeps growing?

When will these files be deleted?
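As far as I understand the WAL lifecycle, growth of the WALs directory is normal under write load: once every edit in a WAL file has been flushed to HFiles, the file moves to /hbase/oldWALs, and the master's LogCleaner chore deletes it after hbase.master.logcleaner.ttl has elapsed (600000 ms, i.e. 10 minutes, by default), provided replication no longer needs it. If files linger in oldWALs, this TTL is worth checking:

```xml
<!-- hbase-site.xml: how long the master keeps files in /hbase/oldWALs
     before the LogCleaner chore deletes them (default 600000 ms = 10 min) -->
<property>
  <name>hbase.master.logcleaner.ttl</name>
  <value>600000</value>
</property>
```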


|LogRotationDir|WARN |org.apache.hadoop.hbase.regionserver.LogRoller|regionserver/slave01-int:16020.logRoller|Failed to schedule flush of 648987d093a56650ccf6763c479505bf, region=null, requester=null

Any advice to address this issue?