How to rotate and archive hdfs-audit log file

Contributor

Hi Team,

I want to rotate and archive (as .gz) the hdfs-audit log files based on size, but after the file reaches 350 KB it is not getting archived. The properties I have set in hdfs-log4j are:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=hdfs-audit-%d{yyyy-MM}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=350KB
log4j.appender.DRFAAUDIT.MaxBackupIndex=9

Any help will be highly appreciated.

1 ACCEPTED SOLUTION

Contributor

Hi Team and @Sagar Shimpi,

The steps below helped me resolve the issue. The org.apache.log4j.rolling.* classes needed for size-based rolling with compression ship in the separate log4j-extras jar, which is not on the HDFS classpath by default, and the plain org.apache.log4j.RollingFileAppender does not understand the rollingPolicy/triggeringPolicy properties.

1) As I am using HDP-2.4.2, I needed to download the jar from http://www.apache.org/dyn/closer.cgi/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.tar....

2) Extract the tar file and copy apache-log4j-extras-1.2.17.jar to /usr/hdp/<version>/hadoop-hdfs/lib on ALL the cluster nodes (see the sketch below).

Note: apache-log4j-extras-1.2.17.jar can also be found in the /usr/hdp/<version>/hive/lib folder; I only found that later.
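
For example, a minimal sketch for pushing the jar to every node (the host-list file and the HDP version value are assumptions; adjust them for your cluster):

# hosts.txt: one cluster hostname per line (hypothetical file)
HDP_VER=2.4.2.0-258   # assumption: match your actual /usr/hdp/<version> directory
while read -r host; do
  scp apache-log4j-extras-1.2.17.jar "${host}:/usr/hdp/${HDP_VER}/hadoop-hdfs/lib/"
done < hosts.txt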

3) Then edit the Advanced hdfs-log4j property in Ambari and replace the default hdfs-audit log4j properties with:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=30
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=16106127360
## 16106127360 bytes = 15 x 1024^3 bytes = 15 GB ##
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz
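## Note: because FileNamePattern ends in .gz, log4j-extras gzips each rolled file automatically ##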

The rolled hdfs-audit log files in .gz look like this:

-rw-r--r-- 1 hdfs hadoop 384M Jan 28 23:51 hdfs-audit-2.log.gz
-rw-r--r-- 1 hdfs hadoop 347M Jan 29 07:40 hdfs-audit-1.log.gz
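
(For reference: with FixedWindowRollingPolicy, hdfs-audit-1.log.gz is always the newest archive; on each roll the indices shift up by one, and once maxIndex is reached the oldest archive is deleted.)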


12 REPLIES

Contributor

@Sagar Shimpi

I checked the above links, but I didn't find how to compress the log file automatically when it reaches the specified MaxFileSize. I need to compress the log files and keep them for up to 30 days, after which they should be deleted automatically. So what additional properties do I need to add to produce .gz files for the hdfs-audit logs?

At present my properties are set as:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=1GB
log4j.appender.DRFAAUDIT.MaxBackupIndex=30

@Rahul Buragohain

It should be RFAAUDIT instead of DRFAAUDIT. Please check the screenshot below:

[screenshot: 11546-rfa.jpg]

Please check this link for more details: http://apprize.info/security/hadoop/7.html

New Contributor

Can you confirm that the settings below have achieved the purpose stated in the subject?

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=30
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=16106127360
## 16106127360 bytes = 15 x 1024^3 bytes = 15 GB ##
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz

Contributor

@English Rose

Yes, I confirm that the settings I mentioned are working properly.

New Contributor

Thanks @Rahul Buragohain, a few more clarifications please:

1. If I want to avoid the 30-file limit (maxIndex=30), I should not be using FixedWindowRollingPolicy/maxIndex and SizeBasedTriggeringPolicy/MaxFileSize, if I am correct?

2. I know I am being a bit greedy here, but do you know of any other built-in process to archive/move the zipped logs to HDFS or another location instead of deleting them? (Yes, I can write a script, something like the sketch below, but I am checking whether I am missing any existing appenders/policies.)
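
For instance, a minimal cron-style sketch (the local log directory and the HDFS target path are assumptions):

#!/bin/sh
# Hypothetical cleanup job: copy rolled hdfs-audit archives into HDFS
# before the fixed window discards them; paths are illustrative.
LOG_DIR=/var/log/hadoop/hdfs      # assumption: ${hadoop.log.dir} on this node
DEST=/archive/hdfs-audit          # assumption: existing HDFS directory
for f in "${LOG_DIR}"/hdfs-audit-*.log.gz; do
  [ -e "$f" ] || continue
  hdfs dfs -put -f "$f" "${DEST}/" && rm -f "$f"
done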

New Contributor

@Rahul Buragohain, also could you please explain rolling.RollingFileAppender?

New Contributor

Hi, I am trying to get gzipped hdfs-audit logs. I am getting the logs every hour, but they are not in .gz. May I know if I need to correct something? And if you can provide some explanation of it, I would really appreciate your help:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd-HH
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=2
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=500MB
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz

Explorer

Will the log4j configuration be reflected automatically, or do I need to restart the process?

Explorer

I made the following changes, but the hdfs-audit logs are not rotating:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=100MB
log4j.appender.DRFAAUDIT.MaxBackupIndex=5
