Created on 09-22-2018 11:32 AM - edited 09-16-2022 06:44 AM
Hello everyone,
I'm trying to configure Log4j for the HDFS NameNode and security audit logs. With either size-based rotation or daily rotation I should be able to rotate and delete the logs, but it is not working as expected.
Can someone send me the best Log4j configurations for the HDFS NameNode/audit logs? I'm attaching my Log4j configuration.
Please help me delete the logs once they are 10 days old; I would also like to gzip the older files. Please see the screenshot below for the space consumed on disk.
For the audit logs, we would like to remove them based on size.
Created 09-23-2018 06:26 AM
Please refer to the steps mentioned in this community article: https://community.hortonworks.com/content/supportkb/150088/how-to-enable-hdfs-audit-log-rotation-and...
Created 09-24-2018 04:30 PM
@Raj ji Did the above-mentioned article help in addressing your query? If yes, could you please log in and mark this answer as "Accept" to close this thread.
Created 09-24-2018 09:18 PM
Looks like there is a duplicate HCC thread with the exact same query here; the screenshot posted to that HCC thread is also the same.
Created 09-24-2018 11:21 PM
Looks like this thread is older than the duplicate thread linked above, so I am posting my update from the other HCC thread here so that the other thread can be deleted.
If you want to rotate as well as compress your logs (like the audit log), you can use "RollingFileAppender" instead of "DailyRollingFileAppender", because with "RollingFileAppender" you get more options to rotate the logs based on various policies, such as "TimeBasedRollingPolicy", and you can also compress the rolled files to "log.gz".
Please refer to the following example for more details: https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th....
Example using "TimeBasedRollingPolicy":
hdfs.audit.logger=WARN,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit.log-%d{yyyyMMdd}.log.gz
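One thing to note (this is a general caveat, not something from the article above): as far as I know the log4j-extras "TimeBasedRollingPolicy" rolls and gzips the files but never deletes the old ones, so to enforce the 10-day retention you asked about you still need an external cleanup. A rough sketch, assuming ${hadoop.log.dir} resolves to /var/log/hadoop/hdfs (adjust the path for your cluster) and that a daily cron entry is acceptable:
# remove rolled, gzipped audit logs older than 10 days, every day at 1 AM
0 1 * * * find /var/log/hadoop/hdfs -name 'hdfs-audit.log-*.gz' -mtime +10 -delete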
Please make sure to copy the "apache-log4j-extras-1.2.17.jar" file into the /usr/hdp/x.x.x.x.x/hadoop/lib/ directory as mentioned in the above article, followed by a restart of all required services.
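For example (just a sketch; substitute your actual HDP version for the x.x.x.x.x placeholder and repeat on every NameNode host before restarting HDFS from Ambari):
cp apache-log4j-extras-1.2.17.jar /usr/hdp/x.x.x.x.x/hadoop/lib/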
Similarly "SizeBasedTriggeringPolicy" can be used as following:
hdfs.audit.logger=WARN,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=10
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit.log-%i.gz
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=10485760
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
Please change the value of "log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize" according to your requirement; here the value "10485760" is 10 MB (10 x 1024 x 1024 bytes).
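With maxIndex=10, the rolled files are named hdfs-audit.log-1.gz through hdfs-audit.log-10.gz and the oldest one is dropped on each further roll, so the retained audit history stays bounded at roughly 10 x 10 MB = 100 MB of uncompressed log data (considerably less on disk, since the rolled files are gzipped) plus the active hdfs-audit.log.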
Reference: https://community.hortonworks.com/questions/212567/log4g-logs-not-rotated-and-zipped.html