
May I delete /var/log/hadoop/hdfs files?

Solved

New Contributor

Hi,

there are many log files in /var/log/hadoop/hdfs with names like hdfs-audit.log.2018-06-01,

and some files, named like hadoop-hdfs-namenode-<hostname>, are very large.

May I delete them all?

(This host runs the NameNode, Ranger, a DataNode, Flume, and the NodeManager.)

Also, is there a configuration that rotates or automatically deletes these logs?

Thanks

1 ACCEPTED SOLUTION

Accepted Solutions

Re: May I delete /var/log/hadoop/hdfs files?

Super Mentor

@Sen Ke

Retaining logs for a certain period is always useful, so that if anything unexpected happens you have them available for analysis.

Some logs, like "hdfs-audit.log", are important because they contain all the auditing data for HDFS access.

However, if you want to delete the old logs, you can: removing them will not cause any service interruption.
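If you do delete by hand, it is safer to remove only the dated rollovers and leave the active log files alone. Below is a sketch of that pattern using find, demonstrated in a scratch directory (on the real host you would point it at /var/log/hadoop/hdfs); the 30-day retention and the host1 filename are assumptions, so adjust them for your cluster:

```shell
# Demo of the cleanup pattern in a scratch directory. On the real host,
# set LOG_DIR=/var/log/hadoop/hdfs instead. 30-day retention is an assumption.
LOG_DIR=$(mktemp -d)                                         # stand-in for /var/log/hadoop/hdfs
touch -d '40 days ago' "$LOG_DIR/hdfs-audit.log.2018-06-01"  # an old daily rollover
touch "$LOG_DIR/hadoop-hdfs-namenode-host1.log"              # active log (hostname is hypothetical)

# Delete only rolled-over files (names containing ".log.") older than 30 days;
# the active *.log files do not match the pattern and are left untouched.
find "$LOG_DIR" -type f -name '*.log.*' -mtime +30 -print -delete
```

Running `-print` before `-delete` (or a dry run with `-print` alone first) lets you confirm exactly which files will go before anything is removed.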

The best approach, though, is to set up the Log4j Extras functionality for your logging, so that old logs are automatically rolled, compressed, and saved. Since these logs are plain text files they compress very well; the compressed logs are typically 10-15 times smaller than the originals.

Please refer to the following article to know more about it:

https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th....
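As a rough sketch, the audit-log appender in log4j.properties could look like the following, assuming the apache-log4j-extras jar is on the daemon classpath; the DRFAAUDIT appender name and the paths are assumptions, so check them against your own log4j.properties before changing anything:

```properties
# Roll hdfs-audit.log daily and gzip each rollover
# (requires the apache-log4j-extras jar on the classpath).
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DRFAAUDIT.RollingPolicy.ActiveFileName=/var/log/hadoop/hdfs/hdfs-audit.log
log4j.appender.DRFAAUDIT.RollingPolicy.FileNamePattern=/var/log/hadoop/hdfs/hdfs-audit.log-%d{yyyy-MM-dd}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
```

The ".gz" suffix in FileNamePattern is what tells the TimeBasedRollingPolicy to compress each file as it is rolled.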

2 Replies


Re: May i delete /var/log/hadoop/hdfs files

New Contributor

@Jay Kumar SenSharma

Thanks for your reply. I deleted some logs to bring the NameNode service back up (the service can't start when the log partition is full),

and my logs grow by over 50 GB per year, so I will study Log4j Extras to reduce the space usage.
