Problem setting up SYSLOG appender for Namenode AUDIT log

Explorer

I posted this to the Google group, but I think most people are here instead.

 

Here it is:

 

Hello,

I'm having trouble getting the HDFS audit logs forwarded to syslog (rsyslogd).

Running CDH 5.3.1 under CM 5.3.3 on CentOS 6.5.

 

Here's what I've done:

 

Step #1:

 

In the NameNode logging safety valve, I added the following:

 

log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,RFAAUDIT,SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1

 

That did not make it work.

 

Step #2:

 

Digging a bit more, I found that under the "Processes" tab there is a list of environment variables, one of them being:

 

HADOOP_AUDIT_LOGGER=INFO,RFAAUDIT

 

To change this, I set the HDFS Service Environment Advanced Configuration Snippet (Service Wide) to:

 

HADOOP_AUDIT_LOGGER=INFO,RFAAUDIT,SYSLOG

 

Still not working.

 

I confirmed that both safety valves were applied by checking stderr.out, which prints:

+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT,SYSLOG -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT,SYSLOG -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '

The Processes tab also shows this: "HADOOP_AUDIT_LOGGER=INFO,RFAAUDIT,SYSLOG"

 

And the log4j.properties contains my lines:

 

log.threshold=INFO
main.logger=RFA
hadoop.root.logger=${log.threshold},${main.logger}
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
log4j.rootLogger=${hadoop.root.logger},EventCounter,EventCatcher
log.dir=/var/log/hadoop-hdfs
log.file=hadoop-cmf-hdfs1-NAMENODE-[MYHOSTNAME].log.out
max.log.file.size=200MB
max.log.file.backup.index=10
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${log.dir}/${log.file}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFA.MaxFileSize=${max.log.file.size}
log4j.appender.RFA.MaxBackupIndex=${max.log.file.backup.index}
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.fs.FSNamesystem.audit=WARN
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.logger.com.cloudera.cmf.event.shaded.org.apache.avro.ipc=FATAL
log4j.appender.EventCatcher=com.cloudera.cmf.eventcatcher.client.logs.ExceptionForwarderAppender
log4j.appender.EventCatcher.serviceType=HDFS
log4j.appender.EventCatcher.filterConfigFile=event-filter-rules.json
log4j.appender.EventCatcher.service=hdfs1
log4j.appender.EventCatcher.roleInstance=hdfs1-NAMENODE-e63c6c50ca428fc1e6b21be95515a3d4
log4j.appender.EventCatcher.role=NAMENODE
log4j.appender.EventCatcher.hostId=be0de0af-b6bc-4f71-b073-ba55f836a382
log4j.appender.EventCatcher.eventServerPort=7184
log4j.appender.EventCatcher.instanceHost=[MYHOSTNAME]
log4j.appender.EventCatcher.eventServerHost=[EVENTSERVER_HOSTNAME]
log4j.appender.EventCatcher.retryInterval=30
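# HDFS audit log settings (RFAAUDIT appender)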
hdfs.audit.logger=${log.threshold},RFAAUDIT
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
log4j.additivity.SecurityLogger=false
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
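# Lines added via the NameNode logging safety valve (step #1 above)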
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,RFAAUDIT,SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1

 


My rsyslog.conf contains this:

 

$template hdfsAuditLogs,"/var/log/%$YEAR%/%$MONTH%/%$DAY%/hdfsaudit.log"
local1.* -?hdfsAuditLogs

 

Testing the local1 facility from Python works.
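
For reference, roughly the kind of test I mean, as a minimal sketch using Python's standard logging module (assuming rsyslog is reading the local /dev/log socket; the logger name is just for the test):

import logging
from logging.handlers import SysLogHandler

# Log to the local syslog socket with facility local1, which should
# match the local1.* rule in rsyslog.conf
handler = SysLogHandler(address="/dev/log", facility=SysLogHandler.LOG_LOCAL1)
logger = logging.getLogger("local1-test")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("test message for the local1 facility")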


Any help would be welcome. I have no clue why syslog is not working...

Thanks

1 ACCEPTED SOLUTION

Explorer
Everything is working. Here's the final config I used:


In the logging safety valve of the NameNode service:

hdfs.audit.logger=${log.threshold},RFAAUDIT,SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1


And the Env Variable Safety Valve (Service Wide):

 
HADOOP_AUDIT_LOGGER=INFO,RFAAUDIT,SYSLOG
 
Thanks for the help.
 
This case is closed 😄


2 REPLIES

Explorer

I found the problem.

 

It turns out that the log4j SyslogAppender uses UDP, and by default rsyslog does not have UDP reception enabled.

 

I added this to rsyslog.conf, and it works for a sample Java app I made. Now I need to make HDFS work, but first I'll start clean to make sure I get it right.

 

$ModLoad imudp
$UDPServerRun 514
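
A quick way to double-check that rsyslog now accepts UDP, independent of the Java app, is to send it a raw syslog datagram (a minimal sketch in Python; <142> is facility local1 (17) * 8 plus severity info (6), and localhost:514 are the defaults the log4j SyslogAppender would use):

import socket

# PRI <142> = facility local1 (17) * 8 + severity info (6)
message = b"<142>udp test message for the local1 facility"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message, ("localhost", 514))
sock.close()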
