
Ranger Audit Log (Add filter)

Explorer

We have an HDF cluster with customized settings to route all the audit logs to the file system.

The long-term plan is to feed these logs into a central platform, but for now they sit at a remote site, and the audit log from normal Kafka message reads is filling up the server disk very quickly.

# note the growth over a few minutes (almost 2 GB in 7 minutes)
-rw-r--r-- 1 kafka hadoop 18363271828 Jan  3 16:21 ranger_kafka_audit.log
-rw-r--r-- 1 kafka hadoop 20067988960 Jan  3 16:28 ranger_kafka_audit.log

We are looking for a temporary solution that writes only the audit entries tagged "forbidden" to the file.

Does anyone have an idea how to customize the configuration so that we can control what content gets logged?

# Turn on the Ranger Kafka audit log
log4j.appender.RANGER_AUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGER_AUDIT.File=${kafka.logs.dir}/ranger_kafka_audit.log
log4j.appender.RANGER_AUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGER_AUDIT.layout.ConversionPattern=%m%n
log4j.appender.RANGER_AUDIT.MaxFileSize=100MB
log4j.appender.RANGER_AUDIT.MaxBackupIndex=30
log4j.logger.ranger.audit=INFO,RANGER_AUDIT
1 ACCEPTED SOLUTION

Super Collaborator

Hi @Rachel Rui Liu,

You can accomplish this in two ways.

1. Using the Logback filter mechanism

For audit logs of forbidden access you will see "result":1 in the entry.
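For illustration, each audit entry in the file is a single JSON line roughly of the following shape (the field values here are made up):

{"repoType":9,"repo":"hdf_kafka","reqUser":"user1","evtTime":"2018-01-03 16:28:01.234","access":"publish","resource":"test-topic","resType":"topic","result":1,"policy":5,"enforcer":"ranger-acl","cliIP":"10.0.0.1"}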

That means we can configure the Logback settings in the NiFi properties (whereas Kafka uses log4j).

Here is a code snippet for the same (you may need to modify it accordingly; note that EvaluatorFilter's default JaninoEventEvaluator requires the Janino library on the classpath):

<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
  <evaluator> <!-- defaults to type ch.qos.logback.classic.boolex.JaninoEventEvaluator -->
    <expression>return message.contains("\"result\":1");</expression>
  </evaluator>
  <onMismatch>DENY</onMismatch>
  <onMatch>NEUTRAL</onMatch>
</filter>

So your nifi-node-logback-env file will contain the following snippet:

<appender name="RANGER_AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${org.apache.nifi.bootstrap.config.log.dir}/ranger_nifi_audit.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/ranger_nifi_audit_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
    <maxFileSize>100MB</maxFileSize>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <immediateFlush>true</immediateFlush>
  <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
    <evaluator> <!-- defaults to type ch.qos.logback.classic.boolex.JaninoEventEvaluator -->
      <expression>return message.contains("\"result\":1");</expression>
    </evaluator>
    <onMismatch>DENY</onMismatch>
    <onMatch>NEUTRAL</onMatch>
  </filter>
  <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
    <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
  </encoder>
</appender>

In the case of log4j, that would be a regular-expression filter:

<RegexFilter regex='.*"result":1.*' onMatch="ACCEPT" onMismatch="DENY"/>
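Note that the RegexFilter element above is Log4j 2 syntax. Kafka in HDF is typically configured through a log4j 1.x properties file, and log4j 1.x supports filters only in its XML configuration. Below is a minimal sketch, assuming the broker's audit appender is moved to a log4j.xml (DOMConfigurator) configuration; adjust paths as needed:

<appender name="RANGER_AUDIT" class="org.apache.log4j.DailyRollingFileAppender">
  <param name="File" value="${kafka.logs.dir}/ranger_kafka_audit.log"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%m%n"/>
  </layout>
  <!-- accept lines containing "result":1, then deny everything else -->
  <filter class="org.apache.log4j.varia.StringMatchFilter">
    <param name="StringToMatch" value="&quot;result&quot;:1"/>
    <param name="AcceptOnMatch" value="true"/>
  </filter>
  <filter class="org.apache.log4j.varia.DenyAllFilter"/>
</appender>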

More on this can be found in the log4j and Logback filter documentation.

2. Using an out-of-the-box approach: a simple shell script that keeps only the "result":1 lines and removes the rest at a periodic interval.

sed '/"result":1/!d' <logfile>
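A minimal sketch of how that could be scheduled (for example from cron); the log path is an assumption, and the file is rewritten in place over the same inode so the appender, which opens the file in append mode, keeps writing to it:

#!/usr/bin/env bash
# Hypothetical periodic cleanup: keep only the "result":1 audit lines.
# The path is an assumption; run from cron, e.g. every 10 minutes.
LOG=/var/log/kafka/ranger_kafka_audit.log

# Filter into a temp file, then copy back over the same inode (cat > rather
# than mv, so the broker's open file handle stays valid).
sed '/"result":1/!d' "$LOG" > "${LOG}.tmp" && cat "${LOG}.tmp" > "$LOG" && rm -f "${LOG}.tmp"

Note that any lines appended between the filter and the copy-back are lost, so the appender-level filter in option 1 is preferable where possible.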

Hope this helps!

