May I delete /var/log/hadoop/hdfs files?
Labels: Apache Hadoop
Created ‎01-21-2019 02:39 AM
Hi,
There are many log files in /var/log/hadoop/hdfs with names like hdfs-audit.log.2018-06-01, and some files, such as hadoop-hdfs-namenode-hostname, are very large.
May I delete them all?
(This host runs the NameNode, Ranger, a DataNode, Flume, and the NodeManager.)
Also, is there any configuration that makes these logs rotate or get deleted automatically?
Thanks
Created ‎01-21-2019 09:49 AM
It is always useful to retain logs for a certain period so that, if anything unexpected happens, you have the data needed for analysis.
Some logs, such as "hdfs-audit.log", are important because they contain all of the auditing data for HDFS access.
However, if you want to delete old log files, you can do so; it will not cause any service interruption.
The best approach, though, is to enable the Log4j Extras functionality for your logging so that old logs are automatically rolled and compressed. Since these logs are plain text files, they compress very well: a compressed log is typically 10-15 times smaller than the original.
Please refer to the following article to know more about it:
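For a one-off cleanup, a sketch like the following can remove dated rollovers older than a chosen retention period. The path, file pattern, and 30-day cutoff are examples only; always review the matched list before deleting anything:

```shell
# Illustrative cleanup -- path and 30-day retention are examples, adjust to your setup.
LOG_DIR="${LOG_DIR:-/var/log/hadoop/hdfs}"
# First print the rollovers that would be removed, then delete once the list looks right.
[ -d "$LOG_DIR" ] && find "$LOG_DIR" -name 'hdfs-audit.log.*' -mtime +30 -print
[ -d "$LOG_DIR" ] && find "$LOG_DIR" -name 'hdfs-audit.log.*' -mtime +30 -delete
```

Do not delete the active log file (e.g. hdfs-audit.log itself) while the service is running; the dated rollovers are the safe targets.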
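As an illustrative sketch, converting the audit appender to the Log4j Extras rolling appender with gzip compression might look like this in log4j.properties (DRFAAUDIT is Hadoop's usual audit appender name; the paths and date pattern here are examples, and apache-log4j-extras must be on the classpath):

```properties
# Sketch only: roll hdfs-audit.log daily and gzip old files automatically.
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=/var/log/hadoop/hdfs/hdfs-audit.log
# The .gz suffix in the pattern tells the policy to compress each rolled file.
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=/var/log/hadoop/hdfs/hdfs-audit.log.%d{yyyy-MM-dd}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %m%n
```

The same pattern applies to the other large logs (NameNode, DataNode, etc.) by adjusting the appender name and file paths accordingly.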
Created ‎01-22-2019 07:28 AM
Thanks for your reply. I deleted some logs to bring the NameNode service back to a running state (the service can't start when the log partition is full).
My logs grow to over 50 GB per year, so I will study Log4j Extras to reduce the space usage.
