
Kafka audit logs stored in HDFS

Contributor

Hello,

I have a scenario with a Hadoop cluster installed with HDP 2.6.5 and a Kafka cluster installed with HDF 3.3.0, with the Ranger service configured.

I want to store the Ranger audit logs in HDFS, so in Kafka I set the property xasecure.audit.destination.hdfs.dir to point to the HDFS directory.

Case one: when using the NameNode address in the URI, the logs are stored in HDFS successfully (xasecure.audit.destination.hdfs.dir=hdfs://<namenode_FQDN>:8020/ranger/audit).

Case two: when using HAProxy (since I have NameNode HA enabled and want to always point to the active NameNode), I get the following error:

2019-04-02 12:00:13,841 ERROR [kafka.async.summary.multi_dest.batch_kafka.async.summary.multi_dest.batch.hdfs_destWriter] org.apache.ranger.audit.provider.BaseAuditHandler (BaseAuditHandler.java:329) - Error writing to log file.
java.io.IOException: DestHost:destPort <ha_proxy_hostname>:8085 , LocalHost:localPort <kafka_broker_hostname>/10.212.164.50:0. Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Is there any extra config to be set?

Thanks


1 ACCEPTED SOLUTION

Contributor

Hi, this is what I did:

  • In Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the HDFS destination directory.

(If you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that contains the nameservice properties, so the audit logs always hit the active NameNode.)

For example, if you have defined fs.defaultFS=hdfs://nameservice, you will add something like:

xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit

Then restart the brokers.
Hope it helps
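To illustrate the HA part: the hdfs-site.xml each broker needs should contain the nameservice definitions from your HDFS cluster. A typical set of entries looks like the sketch below (the nameservice name "mycluster" and the NameNode hostnames are placeholders; copy the real values from your cluster's hdfs-site.xml):

```xml
<!-- Logical nameservice name; must match the one in fs.defaultFS -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- The two NameNode IDs behind the nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC address of each NameNode (hostnames are examples) -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- Lets the HDFS client fail over to the active NameNode on its own,
     which removes the need for an external proxy such as HAProxy -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With these in place, the HDFS client inside the audit writer resolves "mycluster" itself and retries against the active NameNode, which is why the HAProxy workaround (and its RPC error) is no longer needed.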
 


10 REPLIES

New Contributor

Hi,

Could you please share the HDF installation document? I want to install HDF on my personal computer.


Contributor

Can anyone help on this topic?

Contributor

I found out how to proceed, and now I can store the logs in HDFS.

New Contributor

Could you please share how you proceeded?

Thank you

Contributor

Hi, this is what I did:

  • In Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the HDFS destination directory.

(If you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that contains the nameservice properties, so the audit logs always hit the active NameNode.)

For example, if you have defined fs.defaultFS=hdfs://nameservice, you will add something like:

xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit

Then restart the brokers.
Hope it helps
 

Contributor

Sorry, I forgot to add the port.

The correct value would be:

hdfs://nameservice:8020/ranger/audit
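Putting the thread's answer together, the entries under Advanced ranger-kafka-audit end up looking something like the sketch below (the nameservice name is an example, and the enable flag is shown only for completeness; verify the exact property values against your own Ranger setup):

```
# Enable the HDFS audit destination for the Kafka Ranger plugin
xasecure.audit.destination.hdfs=true

# Write audit logs via the HA nameservice instead of a single NameNode or a proxy
xasecure.audit.destination.hdfs.dir=hdfs://nameservice:8020/ranger/audit
```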

New Contributor

It works only if it's the same KDC; in my case, cross-realm trust is needed.

Thank you.

PS: I didn't get the notification either.

 

Regards

Master Mentor

@psilvarochagome
In this community, we share knowledge to advance the Cloudera community and don't get cash for it! Some of these are real production issues. That said, it's unfortunate that people like you get a solution to a problem a member is facing and then don't want to share it, as requested by @slim_abderrahim.

It's very unfortunate. I hope members see this and tag you... we are open-source, as opposed to proprietary code. 🙂
Happy hadooping