
Ranger audit logs copy to a local folder

Contributor

Ranger audit logs for Hive/HDFS currently go to an HDFS folder, in JSON format.

Is it possible to fork out a second copy to a local directory that gets cleaned on a short window (24 hours)?

How?

Thanks,
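For the 24-hour cleanup half of the question, a cron-driven find is usually enough. A minimal sketch, demonstrated in a scratch directory (the real local copy directory and the *.json file pattern are assumptions):

```shell
# Scratch directory standing in for the hypothetical local audit-copy folder.
AUDIT_DIR="$(mktemp -d)"

# Simulate one stale audit file (25 hours old) and one fresh one.
touch -d '25 hours ago' "$AUDIT_DIR/old_audit.json"
touch "$AUDIT_DIR/new_audit.json"

# Remove JSON audit files older than 24 hours (1440 minutes).
find "$AUDIT_DIR" -type f -name '*.json' -mmin +1440 -delete

ls "$AUDIT_DIR"    # only new_audit.json remains
```

In practice the find line would run from cron (for example once an hour) against the real local copy directory.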

1 ACCEPTED SOLUTION

13 REPLIES

Cloudera Employee

Hi @luis marmolejo, please see this article; it has a great breakdown of the Ranger audit framework: http://hortonworks.com/blog/apache-ranger-audit-framework/ The parameter you want is XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY. This is the local directory where the audit log is archived after it is moved to HDFS. I do not see any parameter to control periodic flushing of this directory.

Contributor

Has this property been deleted or renamed in HDP 2.3?

There are the following properties:

xasecure.audit.destination.db.batch.filespool.dir

xasecure.audit.destination.hdfs.batch.filespool.dir

Rising Star

@Carter Everett and @luis marmolejo, the audit implementation has changed from HDP 2.3 onwards. Previously the audits were written to a local file and then copied over to HDFS. From HDP 2.3 onwards, the audits are streamed directly to HDFS and are written to the local spool folder only if the destination is unavailable.
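For anyone looking for the HDP 2.3+ property names: the streaming destination and its local spool directory are configured per component, roughly like this (the values shown are illustrative placeholders, not defaults):

```
# Illustrative ranger-<service>-audit settings; the NameNode URL and paths
# are placeholders.
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://<namenode>:8020/ranger/audit
# Local spool directory, used only while the HDFS destination is unreachable:
xasecure.audit.destination.hdfs.batch.filespool.dir=/var/log/hadoop/audit/spool
```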


Rising Star

@Ramesh Mani, do we have references on enabling this via Ambari? If we manually modify the config file, it will be overwritten the next time Ambari restarts Ranger. This is assuming Ambari is used to manage the cluster.

Super Collaborator
@bdurai

I don't see an internal reference for this. We need to create one.

You are right: we need to make the configuration changes via Ambari for the respective components if Ambari is used.

Contributor

@Ramesh Mani @bdurai There is something missing.

I applied the indicated properties on the HDP Sandbox via Ambari, restarted the components, and immediately saw the file created (zero length).

I then ran some queries from Beeline, but the file is never appended to (and no new files are created either)!

I changed the date pattern to roll every minute (fragment from "Advanced hive-log4j" in Ambari):

ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern='.'yyyy-MM-dd-HH-mm

Has anybody tried this configuration ?

Super Collaborator

@luis marmolejo Please check the permissions of the file /tmp/ranger_hdfs_audit.log. Make sure it has read/write permission for others as well. This configuration is working fine for me.
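To check and, if needed, widen those permissions (the path comes from the log4j fragment earlier in the thread; mode 666 is just one permissive choice):

```shell
LOGFILE=/tmp/ranger_hdfs_audit.log

# Create the file if the appender has not done so yet, then inspect its mode.
touch "$LOGFILE"
ls -l "$LOGFILE"

# Give read/write to owner, group, and others so the service user can append.
chmod 666 "$LOGFILE"
```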

Contributor
@Ramesh Mani

Is it possible to copy the Ranger audit logs to a different server that is on the same network but outside of the cluster?
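One option, since the log4j setup above already writes audits to a local file, is to push that file to the remote host on a schedule. A sketch (the host name, user, and paths are placeholders):

```
# Illustrative crontab entry: every 10 minutes, sync the local audit log
# files to a server outside the cluster over SSH.
*/10 * * * * rsync -az /tmp/ranger_hdfs_audit.log* ranger@audit-archive.example.com:/data/ranger-audit/
```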