

The HBase oldWALs folder is very large in CDH 5.3.2 & CM 5.3.2, how to clean it?

Hi,

Our production environment runs CDH 5.3.2 and CM 5.3.2. The files under /hbase/oldWALs have grown very large, more than 11 TB, which costs far too much disk space on our 20-node cluster.

From searching, this behavior may be related to the hbase.replication setting. We disabled hbase.replication in the CM GUI when we built the cluster, but hbasehost:60010/conf does not show hbase.replication=false; in fact hbase.replication does not appear in the XML at all, so it seems the setting has not taken effect.

Below are the logs:

2015-12-28 11:19:09,858 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleaner master:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs
2015-12-28 11:20:09,802 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleaner master:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs
2015-12-28 11:21:09,990 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleaner master:master:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://iwgameNS/hbase/oldWALs

I would like to know how to clean this folder, manually or automatically. Does anyone have a suggestion? Any advice is appreciated. Thank you in advance.
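For illustration only, here is a minimal Java sketch (not from the original post) of how the size of /hbase/oldWALs could be inspected and old entries trimmed with the Hadoop FileSystem API. It assumes replication is genuinely disabled and that nothing still needs these WALs; the class name and the 7-day retention are made up for the example, and the proper fix is still to get the master's log cleaner chore running again rather than deleting by hand.

// Hypothetical example: list /hbase/oldWALs and delete files older than a chosen age.
// Run only if replication is truly off and no peer or backup process needs these WALs.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OldWalsCleanup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path oldWals = new Path("/hbase/oldWALs");       // path taken from the post
        long maxAgeMs = 7L * 24 * 60 * 60 * 1000;        // example retention: 7 days (an assumption)

        long now = System.currentTimeMillis();
        long reclaimed = 0;
        for (FileStatus status : fs.listStatus(oldWals)) {
            // Only remove plain files whose modification time is older than the retention window.
            if (status.isFile() && now - status.getModificationTime() > maxAgeMs) {
                reclaimed += status.getLen();
                fs.delete(status.getPath(), false);      // non-recursive delete of a single WAL file
            }
        }
        System.out.println("Reclaimed bytes: " + reclaimed);
    }
}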