Contributor
Posts: 47
Registered: ‎07-27-2015

The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

Hi,

Our production environment runs CDH 5.3.2 and CM 5.3.2. The files under /hbase/oldWALs have grown very large, more than 11 TB, which costs too much disk space on our 20-node cluster.

From searching, this behavior may be related to the hbase.replication configuration. We disabled hbase.replication in the CM GUI when we built the cluster, but hbasehost:60010/conf does not show hbase.replication=false; in fact the property does not appear in the XML output at all, so it seems the setting is not in effect.

Below are the logs:

2015-12-28 11:19:09,858 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleanermaster:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs
2015-12-28 11:20:09,802 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleanermaster:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs
2015-12-28 11:21:09,990 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleanermaster:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs

I would like to know how to clean this folder, manually or automatically. Does anyone have a suggestion? Any advice is appreciated. Thank you in advance.
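For reference, the directory's growth can be measured with the standard HDFS shell; a minimal sketch, assuming the default /hbase root directory (adjust the path for your deployment):

```shell
# Summarize how much space oldWALs is consuming (human-readable).
hdfs dfs -du -s -h /hbase/oldWALs

# List the oldest entries to see how long WAL files have been retained.
hdfs dfs -ls /hbase/oldWALs | head -20
```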
Contributor
Posts: 47
Registered: ‎07-27-2015

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

Any update?
Contributor
Posts: 47
Registered: ‎07-27-2015

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

This issue is urgent for us. Can anyone help?
Contributor
Posts: 47
Registered: ‎07-27-2015

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

Hi, can anyone give me some feedback?


Expert Contributor
Posts: 101
Registered: ‎01-24-2014

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

I have this problem as well on CDH 5.4.9.


I found the following: http://stackoverflow.com/questions/28725364/hbase-oldwals-what-it-is-and-how-can-i-clean-it


However, I do not have a replication target, and I don't see any log cleaner messages in my logs.
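The cleaner chore logs to the HMaster log, so its activity (or absence) can be checked there; a hypothetical check, assuming a typical CDH log location (the path and filename pattern here are illustrative and will differ per deployment):

```shell
# Search the active HMaster log for cleaner-chore activity.
# The log path below is an assumption; locate your actual HMaster
# log via Cloudera Manager or /var/log/hbase.
grep -iE "LogCleaner|CleanerChore" /var/log/hbase/*MASTER*.log.out | tail -20
```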

Contributor
Posts: 47
Registered: ‎07-27-2015

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

Hi, thanks for your reply. I ran the list_peers command in our HBase shell, and no peers are listed. So, can I delete the files in the oldWALs folder directly?

Thanks,
Paul
Posts: 1,894
Kudos: 432
Solutions: 302
Registered: ‎07-31-2013

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

Typically the cleaner will not delete oldWALs automatically if there is a replication znode holding/tracking them (usually when you use the replication feature, have used it in the past, or use the Lily-based HBase Indexer service, which works via replication), or when a snapshot of an unflushed region still tracks the WAL file.

Since your WALs have existed for a very long time, the cause is likely the former, some replication remnant, rather than snapshots, because region memstores are periodically flushed (for example when the maxlogs limit is hit, which forces a flush of all memstore regions).

Please post the output of "ls /hbase/replication" (and its sub-znodes) from the "zookeeper-client" shell. If there are any znodes under it, you will need to clean them up with rm/rmr in the same shell. Once done, restart the HMaster, and the cleaner should be able to wipe the files away.

If you are a hundred percent sure you do not have any form of replication in use, nor any snapshots, you may also choose to delete the oldWALs directory files manually.
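The steps above can be sketched as follows; this is a hypothetical session assuming a ZooKeeper quorum reachable on localhost:2181 and the default /hbase znode parent, and the peer id "1" is illustrative (verify each znode before removing anything):

```shell
# Inspect the replication znodes that may be pinning oldWALs.
zookeeper-client -server localhost:2181 ls /hbase/replication

# If a stale peer znode is found (peer id "1" here is an example,
# not a real value from this cluster), remove it:
zookeeper-client -server localhost:2181 rmr /hbase/replication/peers/1

# After restarting the HMaster, the LogCleaner chore should resume
# deleting. Only as a last resort, and only when no replication or
# snapshots are in use, remove the files manually:
hdfs dfs -rm -r -skipTrash /hbase/oldWALs/*
```

The manual delete bypasses the HDFS trash, so the space is reclaimed immediately but the operation is irreversible; leaving off -skipTrash is the safer choice if disk pressure allows it.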
Posts: 1,894
Kudos: 432
Solutions: 302
Registered: ‎07-31-2013

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

BTW, CDH 5.3.2 is a bad release to be on due to the identified issues HDFS-7960 and HDFS-7575. Please consider moving to the latest maintenance release of 5.3.x, if not 5.5.x.
Expert Contributor
Posts: 101
Registered: ‎01-24-2014

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

[zk: localhost:2181(CONNECTED) 0] ls /hbase/replication
[peers, rs]
[zk: localhost:2181(CONNECTED) 1] ls /hbase/replication/peers
[]
[zk: localhost:2181(CONNECTED) 2] get /hbase/replication/peers

cZxid = 0x30000c30d
ctime = Tue Sep 08 14:45:57 CDT 2015
mZxid = 0x30000c30d
mtime = Tue Sep 08 14:45:57 CDT 2015
pZxid = 0x30000c30d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 0
[zk: localhost:2181(CONNECTED) 3] get /hbase/replication

cZxid = 0x30000c304
ctime = Tue Sep 08 14:45:57 CDT 2015
mZxid = 0x30000c304
mtime = Tue Sep 08 14:45:57 CDT 2015
pZxid = 0x30000c30d
cversion = 2
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 2


/hbase/replication/rs lists all of the regionservers in the cluster, with nothing under them.

Note: we do have hbase.replication set to true, just in case we need to replicate in the future. This is a practice we carried over from CDH 4.6. I can confirm that we have never actually had a replication peer in the cluster in question.

Contributor
Posts: 47
Registered: ‎07-27-2015

Re: The folder hbase's oldWALs is so large in CDH5.3.2 & CM5.3.2, how to clean?

On CDH 5.3.2 with CM 5.3.2, it seems hbase.replication cannot be set to false by unchecking the checkbox in the CM admin console.
From the link http://mail-archives.apache.org/mod_mbox/hbase-user/201503.mbox/%3CCANZa=GsTEEvRwi4NJqfUHdi_pc48c1TR...
is this a bug in CDH 5.3.2 with CM 5.3.2?