
Ranger audit logs copy to a local folder


Ranger Audit logs for Hive/HDFS currently go to an HDFS folder. Format is json.

Is it possible to fork out a second copy to a local directory that gets cleaned on a short window (e.g., 24 hours)?

How?

Thanks,

1 ACCEPTED SOLUTION

Expert Contributor

@luis marmolejo You can configure Ranger audit to go to a Log4j appender; that way a copy can be sent to a file, as you need. Configure these properties via Ambari for the respective components if you are using Ambari to manage the cluster.

1) Enable auditing to the log4j appender by adding the following properties to ranger-&lt;component&gt;-audit.xml:

<property>
  <name>xasecure.audit.log4j.is.enabled</name>
  <value>true</value>
</property>
<property>
  <name>xasecure.audit.destination.log4j</name>
  <value>true</value>
</property>
<property>
  <name>xasecure.audit.destination.log4j.logger</name>
  <value>xaaudit</value>
</property>

2) Add the appender to the log4j.properties or log4j.xml file for the &lt;component&gt;:

ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern=.yyyy-MM-dd

Restart the respective component.

A copy of the Ranger audit will then be written to /tmp/ranger_hdfs_audit.log (in this case).
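Since the original question asked for a short retention window on the local copy, here is a minimal cleanup sketch; the directory, the filename pattern, and the 24-hour default are assumptions matched to the appender config above, so adjust them to your own File/DatePattern settings:

```shell
# purge_old_audits DIR [MINUTES]: delete rolled-over audit files in DIR
# older than MINUTES (default 1440 = 24 hours). DailyRollingFileAppender
# renames completed files to ranger_hdfs_audit.log.<date>, which is the
# pattern matched here; the active file is left alone.
purge_old_audits() {
  dir="$1"
  retention_min="${2:-1440}"
  find "$dir" -maxdepth 1 -name 'ranger_hdfs_audit.log.*' \
       -mmin "+$retention_min" -delete
}
```

Invoking this hourly from cron against the appender's directory would give the 24-hour cleanup window the question asks for.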


13 REPLIES

Cloudera Employee

Hi @luis marmolejo, please see this article; it has a great breakdown of the Ranger audit framework: http://hortonworks.com/blog/apache-ranger-audit-framework/ The parameter you want is XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY. This is the local directory where the audit log is archived after it is moved to HDFS. I do not see any parameter to control periodic flushing of this directory.


Has this property been deleted or renamed in HDP 2.3?

There are the following properties:

xasecure.audit.destination.db.batch.filespool.dir

xasecure.audit.destination.hdfs.batch.filespool.dir

Contributor

@Carter Everett and @luis marmolejo, the audit implementation changed from HDP 2.3 onwards. Previously the audits were written to a local file and copied over to HDFS. From HDP 2.3 onwards, the audits are streamed directly to HDFS; they are written to the local spool folder only if the destination is not available.


Contributor

@Ramesh Mani, do we have references for enabling this via Ambari? If we manually modify the config file, it will be overwritten the next time Ambari restarts Ranger. This assumes Ambari is used to manage the cluster.

Expert Contributor
@bdurai

I don't see an internal reference for this; we need to create one.

You are right: we need to make the configuration changes via Ambari for the respective components if Ambari is used.


@Ramesh Mani @bdurai There is something missing.

I applied the indicated properties on the HDP Sandbox via Ambari and restarted the components, and I immediately saw the file created (zero length).

I ran some queries from Beeline, but the file is never appended to (and no new files are created either).

I changed the date pattern to roll every minute (fragment from "Advanced hive-log4j" in Ambari):

ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern='.'yyyy-MM-dd-HH-mm

Has anybody tried this configuration?

Expert Contributor

@luis marmolejo Please check the permissions of the file /tmp/ranger_hdfs_audit.log. Make sure it has read/write permission for others as well. This configuration is working fine for me.
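As a quick sketch of that permission check (the path is the one from the appender config in this thread; run it as root or the file's owner):

```shell
# Create the audit file if it does not exist yet, then open it up so the
# component's service account (hive, hdfs, ...) can append to it.
AUDIT_FILE="${AUDIT_FILE:-/tmp/ranger_hdfs_audit.log}"
touch "$AUDIT_FILE"
chmod a+rw "$AUDIT_FILE"
ls -l "$AUDIT_FILE"
```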

Contributor
@Ramesh Mani

Is it possible to copy the Ranger audit logs onto a different server that is on the same network but outside of the cluster?

New Contributor

Hi,

Do we have any solution for the config file /etc/hadoop/conf/ranger-hdfs-audit.xml being overwritten by Ambari?

We are manually updating the file; can we instead add the following configuration properties via Ambari?

 <property>
   <name>xasecure.audit.log4j.is.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>xasecure.audit.destination.log4j</name>
   <value>true</value>
 </property>
 <property>
   <name>xasecure.audit.destination.log4j.logger</name>
   <value>xaaudit</value>
 </property> 

@Don Bosco Durai

Contributor

@Calvin Pietersen

You can add the properties under the "Custom ranger-hdfs-audit" section in Ambari (see the attached screenshot, custom-audit.png).
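For scripting the same change, one hedged option is the configs.sh helper that ships with Ambari server; the host, cluster name, and admin credentials below are placeholders you would replace with your own:

```shell
# Sketch: persist the audit properties through Ambari so they survive
# restarts, instead of hand-editing ranger-hdfs-audit.xml.
AMBARI_HOST=ambari.example.com   # assumption: your Ambari server host
CLUSTER=mycluster                # assumption: your cluster name
CFG=/var/lib/ambari-server/resources/scripts/configs.sh

# Each "set" writes one key into the ranger-hdfs-audit config type.
$CFG -u admin -p admin set "$AMBARI_HOST" "$CLUSTER" ranger-hdfs-audit \
    xasecure.audit.destination.log4j true
$CFG -u admin -p admin set "$AMBARI_HOST" "$CLUSTER" ranger-hdfs-audit \
    xasecure.audit.destination.log4j.logger xaaudit
```

This is a configuration operation against a live Ambari server, so it is shown as a sketch only; verify the script path and flags on your Ambari version first.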

New Contributor

Hi @Ramesh Mani , @Carter Everett

I am trying to send HDFS Ranger audit logs to Kafka via log4j. I am using the HDP 2.5 Sandbox, where I have Ranger 0.6 and Kafka 0.10.0.1.

I have added the below in "Custom ranger-hdfs-audit" using Ambari:

  • xasecure.audit.destination.log4j=true
  • xasecure.audit.log4j.is.enabled=true
  • xasecure.audit.destination.log4j.logger=xaaudit

I have also added the below in "Advanced hdfs-log4j" using Ambari:

#Kafka Appender
ranger.logger=INFO,console,KAFKA
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L %% - %m%n
log4j.appender.KAFKA.BrokerList=sandbox.hortonworks.com:6667
log4j.appender.KAFKA.Topic=HDFS_AUDIT_LOG
log4j.appender.KAFKA.ProducerType=sync

I then restarted HDFS (NameNode, DataNode, and other dependencies). But now when I run hdfs dfs -ls /, I get the error below:

[root@sandbox ~]# hdfs dfs -ls /
log4j:ERROR Could not instantiate class [org.apache.kafka.log4jappender.KafkaLog4jAppender].
java.lang.ClassNotFoundException: org.apache.kafka.log4jappender.KafkaLog4jAppender
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
	at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
	at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
	at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
	at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
	at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
	at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
	at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
	at org.apache.log4j.Logger.getLogger(Logger.java:104)
	at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
	at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
	at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
	at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
	at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
	at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
	at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Could not instantiate appender named "KAFKA".

Found 12 items

drwxrwxrwx - yarn hadoop 0 2016-10-25 08:10 /app-logs

drwxr-xr-x - hdfs hdfs 0 2016-10-25 07:54 /apps

drwxr-xr-x - yarn hadoop 0 2016-10-25 07:48 /ats

drwxr-xr-x - hdfs hdfs 0 2016-10-25 08:01 /demo

... and so on.
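One hedged note on the log4j:ERROR above: the ClassNotFoundException means the JVM running the hdfs command (and the HDFS daemons) cannot see the kafka-log4j-appender jar. A possible fix is to make that jar visible on the Hadoop classpath; the paths below are assumptions for an HDP sandbox layout, so verify them on your system first:

```shell
# Assumed HDP layout -- check where the jar actually lives first:
ls /usr/hdp/current/kafka-broker/libs/ | grep -i log4j-appender

# Copy (or symlink) it into the Hadoop client lib directory so the hdfs
# CLI and the NameNode can load org.apache.kafka.log4jappender.KafkaLog4jAppender:
cp /usr/hdp/current/kafka-broker/libs/kafka-log4j-appender-*.jar \
   /usr/hdp/current/hadoop-client/lib/
```

This is an environment-specific operations sketch, not something testable outside a cluster; after the copy, the affected services would need a restart.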

Could you please help me with this?

Regards,

Vishnu Sure.

Contributor

@Ramesh Mani @vperiasamy

Ramesh, I made the above changes. I got 6 different logs in the /ranger/audit/hdfs/ directory in HDFS.

I am also unable to see the content of those log files; I pasted the cat output of one log file below.

Can you help me with this?

hdfs dfs -cat /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log

cat: Cannot obtain block length for LocatedBlock{BP-211226024-10.224.60.23-1481061235494:blk_1091267231_17616185; getBlockSize()=1483776; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.224.60.21:50010,DS-d9d6b48a-2212-4529-a719-827215e3967a,DISK], DatanodeInfoWithStorage[10.224.60.22:50010,DS-04f30f6e-20b7-48af-9872-7d2782dff0ad,DISK], DatanodeInfoWithStorage[10.224.60.52:50010,DS-3f1ae50a-9ade-419f-9b39-fa3ac1d4f308,DISK]]}

[hdfs@instance-1 ~]$ hdfs dfs -ls /ranger/audit/hdfs/20180326

Found 6 items
-rw-r--r--   3 hdfs hdfs    1419264 2018-03-26 23:56 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log
-rw-r--r--   3 hdfs hdfs       1894 2018-03-26 22:44 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.2.log
-rw-r--r--   3 hdfs hdfs      59252 2018-03-26 22:56 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.3.log
-rw-r--r--   3 hdfs hdfs     580608 2018-03-27 00:59 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.4.log
-rw-r--r--   3 hdfs hdfs      29635 2018-03-26 23:58 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.5.log
-rw-r--r--   3 hdfs hdfs     193536 2018-03-26 17:43 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.log
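One hedged note on the "Cannot obtain block length for LocatedBlock" error above: it typically means the file was never closed by its writer. For an audit file that is no longer being written to, the hdfs debug recoverLease command can finalize it so cat works; the path below is the one from the error message:

```shell
# Recover the lease on the unclosed audit file so its block length is
# finalized and it becomes readable. Only run this against a file the
# writer has abandoned, not one still being streamed to.
hdfs debug recoverLease \
  -path /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log \
  -retries 3
```

This requires a live HDFS cluster, so it is shown as an operations sketch only.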