Support Questions

Find answers, ask questions, and share your expertise

Kafka audit logs stored in HDFS

Contributor

Hello,

I have a scenario with a Hadoop cluster running HDP 2.6.5 and a Kafka cluster running HDF 3.3.0, with the Ranger service configured.

I want to store the Ranger audit logs in HDFS, so I set the Kafka property xasecure.audit.destination.hdfs.dir to point to the HDFS directory.

Case one: when using the NameNode in the URI, the logs are stored in HDFS successfully (xasecure.audit.destination.hdfs.dir=hdfs://<namenode_FQDN>:8020/ranger/audit).

Case two: using HAProxy (since I have NameNode HA enabled and want to always point to the active NameNode), I get the following error:

2019-04-02 12:00:13,841 ERROR [kafka.async.summary.multi_dest.batch_kafka.async.summary.multi_dest.batch.hdfs_destWriter] org.apache.ranger.audit.provider.BaseAuditHandler (BaseAuditHandler.java:329) - Error writing to log file.
java.io.IOException: DestHost:destPort <ha_proxy_hostname>:8085 , LocalHost:localPort <kafka_broker_hostname>/10.212.164.50:0. Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Is there any extra configuration to be set?

Thanks


1 ACCEPTED SOLUTION

Contributor

Hi, this is what I did:

  • In Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the HDFS destination dir.

(If you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that contains the nameservice properties, so the audit logs always reach the active NameNode.)

For example, if you have defined fs.defaultFS=hdfs://nameservice, you will add something like

xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit
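A minimal sketch of the hdfs-site.xml entries the brokers would need to resolve the nameservice; the nameservice name ("nameservice") and NameNode hostnames (nn1.example.com, nn2.example.com) are placeholders for illustration, so substitute your own values from the HDP cluster's hdfs-site.xml:

```xml
<configuration>
  <!-- Logical name of the HA NameNode pair -->
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice</value>
  </property>
  <!-- Logical IDs of the two NameNodes in this nameservice -->
  <property>
    <name>dfs.ha.namenodes.nameservice</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of each NameNode (hypothetical hostnames) -->
  <property>
    <name>dfs.namenode.rpc-address.nameservice.nn1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nameservice.nn2</name>
    <value>nn2.example.com:8020</value>
  </property>
  <!-- Lets the HDFS client fail over to whichever NameNode is active -->
  <property>
    <name>dfs.client.failover.proxy.provider.nameservice</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With this in place, the client-side failover proxy handles picking the active NameNode, so no external load balancer (such as HAProxy) is needed in front of the NameNodes for the audit writes.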

Then restart the brokers.
Hope it helps.


10 REPLIES

Contributor

Actually, I didn't share it because I didn't get the notification about this message. Of course I will do it.

Best

Paula