Ranger audit logs copy to a local folder
Labels: Apache Hadoop, Apache Hive, Apache Ranger
Created ‎06-01-2016 02:00 AM
Ranger audit logs for Hive/HDFS currently go to an HDFS folder, in JSON format.
Is it possible to fork out a second copy to a local directory that gets cleaned on a short window (24 hours)? How?
Thanks,
Created ‎06-02-2016 07:09 PM
@luis marmolejo You can configure Ranger audit to go to a Log4j appender; that way a copy can be sent to a file as you need. If you are using Ambari to manage the cluster, configure these properties via Ambari for the respective components.
1) Enable auditing to the log4j appender by adding the following properties to ranger-<component>-audit.xml:
<property>
<name>xasecure.audit.log4j.is.enabled</name>
<value>true</value>
</property>
<property>
<name>xasecure.audit.destination.log4j</name>
<value>true</value>
</property>
<property>
<name>xasecure.audit.destination.log4j.logger</name>
<value>xaaudit</value>
</property>
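If the cluster is managed by Ambari, the same three settings can be added as name=value pairs instead of editing the XML by hand; a minimal sketch, assuming the component is Hive and the section is named something like "Custom ranger-hive-audit" (the exact section name may differ by HDP version):
xasecure.audit.log4j.is.enabled=true
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=xaaudit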
2) Add the appender to the log4j.properties or log4j.xml file for the <component>
ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern=.yyyy-MM-dd
3) Restart the respective component.
A copy of the Ranger audit will then be written to /tmp/ranger_hdfs_audit.log (in this case).
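Note that DailyRollingFileAppender rolls the file daily but never deletes the rolled files, so the 24-hour cleanup from the original question would still need an external cleanup job. A rough alternative, if size-bounded retention is acceptable, is log4j's RollingFileAppender, which keeps only a fixed number of backups (values below are illustrative):
log4j.appender.RANGERAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.MaxFileSize=100MB
log4j.appender.RANGERAUDIT.MaxBackupIndex=5
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n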
Created ‎06-01-2016 02:16 AM
Hi @luis marmolejo, please see this article; it has a great breakdown of the Ranger audit framework: http://hortonworks.com/blog/apache-ranger-audit-framework/ The parameter you want is XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY. This is the local directory where the audit log is archived after it is moved to HDFS. I do not see any parameters to control periodic flushing of this directory.
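A minimal sketch of how that parameter would be set, assuming the pre-HDP-2.3 plugin's install.properties is where it lives (the path shown is illustrative, not a default):
XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY=/var/log/hadoop/hdfs/audit/archive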
Created ‎06-01-2016 06:31 PM
Has this property been deleted or renamed in HDP 2.3?
There are the following properties:
xasecure.audit.destination.db.batch.filespool.dir
xasecure.audit.destination.hdfs.batch.filespool.dir
Created ‎06-02-2016 07:25 PM
@Carter Everett and @luis marmolejo, the audit implementation has changed from HDP 2.3 onwards. Previously the audits were written to a local file and copied over to HDFS. From HDP 2.3 onwards, the audits are streamed directly to HDFS; they are written to the local spool folder only if the destination is not available.
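For reference, a rough sketch of the HDP 2.3+ style audit settings in ranger-<component>-audit.xml (hostname and paths are illustrative; the filespool property is the one quoted above):
<property>
<name>xasecure.audit.destination.hdfs</name>
<value>true</value>
</property>
<property>
<name>xasecure.audit.destination.hdfs.dir</name>
<value>hdfs://<namenode-host>:8020/ranger/audit</value>
</property>
<property>
<name>xasecure.audit.destination.hdfs.batch.filespool.dir</name>
<value>/var/log/hive/audit/hdfs/spool</value>
</property>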
Created ‎06-02-2016 07:21 PM
@Ramesh Mani, do we have references for enabling this via Ambari? If we manually modify the config file, it will be overwritten the next time Ambari restarts Ranger. This is assuming Ambari is used to manage the cluster.
Created ‎06-02-2016 07:39 PM
I don't see an internal reference for this. We need to create one.
You are right: we need to make the configuration changes via Ambari for the respective components if Ambari is used.
Created ‎06-03-2016 06:25 PM
@Ramesh Mani @bdurai There is something missing.
I applied the indicated properties on the HDP Sandbox via Ambari and restarted the components, and I immediately see the file created (zero length).
I ran some queries from Beeline, but the file is never appended to (and no new files are created either).
I changed the date pattern to roll every minute (fragment from "Advanced hive-log4j" in Ambari):
ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern='.'yyyy-MM-dd-HH-mm
Has anybody tried this configuration?
Created ‎06-04-2016 12:32 AM
@luis marmolejo Please check the permissions of the file /tmp/ranger_hdfs_audit.log. Make sure it has read/write permission for others as well. This configuration is working fine for me.
Created ‎03-21-2018 03:59 PM
Is it possible to copy the Ranger audit logs to a different server that is on the same network but outside of the cluster?
