
Disallow anonymous topic creation in Kerberized environment

Contributor

Hello,

I am doing some tests on a Kerberized HDF cluster with Ranger enabled.

Using Kafka, I noticed that anybody can create/describe/delete topics through ZooKeeper without being authenticated.

Here is an example. I used a server that is not part of the HDF cluster and doesn't have Kerberos installed:

[root@test_node ~/kafka_2.11-1.1.1/bin]# ./kafka-topics.sh --zookeeper zk_node:2181 --create --topic test_topic --partitions 1 --replication-factor 1
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "test_topic".
[root@test_node ~/kafka_2.11-1.1.1/bin]# ./kafka-topics.sh  --zookeeper zk_node:2181 --topic test_topic --describe
Topic:test_topic        PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: test_topic       Partition: 0    Leader: 1004    Replicas: 1004  Isr: 1004
[root@test_node ~/kafka_2.11-1.1.1/bin]# ./kafka-topics.sh  --zookeeper zk_node:2181 --delete --topic test_topic
Topic test_topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@test_node ~/kafka_2.11-1.1.1/bin]# ./kafka-topics.sh  --zookeeper zk_node:2181 --topic test_topic --describe
[root@test_node ~/kafka_2.11-1.1.1/bin]# klist
-bash: klist: command not found

I have also been able to delete all the topics created/autocreated by authenticated users.
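
For reference, the wide-open ACL can be confirmed from the same unauthenticated host using the zookeeper-shell.sh that ships with the same Kafka distribution (a sketch; output abbreviated):

./zookeeper-shell.sh zk_node:2181
getAcl /config/topics
'world,'anyone
:cdrwa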

As you can see, Kerberos is enforced when I try to consume/produce data on a topic:

[root@test_node2 /usr/hdf/current/kafka-broker/bin]# ./kafka-console-producer.sh --broker-list kafka_node:6668 --topic test_topic2 --security-protocol SASL_SSL --producer.config /root/client-ssl.properties
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:303)
        at kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)
        at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:50)
        at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
        at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:141)
        at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
        at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
        ... 4 more

Is there a way to prevent this from happening?

6 REPLIES

Mentor

@Raffaele S

Yes, there is a way. You will need to do a couple of things, like setting ACLs and editing the Kafka and ZooKeeper config files, to achieve that. By default the znodes have open permissions, so anyone can change or delete a topic or znode:

[zk: localhost:2181(CONNECTED) 0] getAcl /zookeeper
'world,'anyone
: cdrwa
[zk: localhost:2181(CONNECTED) 1] 

Once you enable ACLs they will ONLY apply to newly created topics, so you can restrict those to r, w, etc.

You can achieve that by hardening ZooKeeper.

Edit the ~/kafka/config/zookeeper.properties

# Add these 2 properties: the first specifies the authentication provider so ZooKeeper
# can do Kerberos (SASL) authentication, the second the interval in milliseconds at
# which to renew the Kerberos ticket

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000

Create a ZooKeeper JAAS file at ~/kafka/config/zookeeper_jaas.conf:

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/zookeeper.service.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="zookeeper/FQDN@REALM";
};

Edit the Kafka JAAS file and add a Client section for ZooKeeper:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/zookeeper.service.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="zookeeper/FQDN@REALM";
};

Add the variable below to your ZooKeeper startup script or systemd unit:

Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/path/to/your/zookeeper_jaas.conf"
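
For example, if ZooKeeper runs as a systemd service, a minimal sketch of a drop-in unit (the unit name zookeeper and the drop-in path are assumptions; adjust for your installation):

# /etc/systemd/system/zookeeper.service.d/jaas.conf  (hypothetical drop-in path)
[Service]
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/path/to/your/zookeeper_jaas.conf"

Then reload systemd and restart the service:

systemctl daemon-reload
systemctl restart zookeeper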

Now, after modifying the ZooKeeper and Kafka config files, restart both the ZooKeeper and Kafka services and validate that they start authenticated:

$ journalctl -u zookeeper | grep authenticated 

The output should contain a line like INFO successfully authenticated client authenticationID=zookeeper.......

$ journalctl -u kafka | grep -i saslauthenticated

The output should contain SaslAuthenticated. Now create a new znode in ZooKeeper to test:

create /test-znode "saslenabled" sasl:zookeeper/FQDN@REALM:cdrwa

Now try running getAcl without Kerberos authentication:

getAcl /test-znode
'sasl,'zookeeper/FQDN@REALM
:cdrwa
get /test-znode
Authentication is not valid : /test-znode

Open a second CLI with Kerberos authentication by executing:

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/zookeeper_jaas.conf"

getAcl /test-znode
'sasl,'zookeeper/FQDN@REALM
:cdrwa

It's successful

Now go to the Kafka znodes in ZooKeeper:

ls /config/topics
[oracle-import, first-topic, __consumer_offsets]

getAcl /config/topics/first-topic
'world,'anyone
:cdrwa

This means anyone can read, write, and even delete it. Adding the principals of all the Kafka brokers in a cluster would be tedious work, so to avoid limiting access to only one host in a multi-broker cluster you need to edit how the Kerberos principal is stored, as shown below.

Connect to the Kafka host, edit the kafka.properties, and add the property:

zookeeper.set.acl=true

Restart Kafka after the above change. This will force ONLY new topics to have their ACLs tied to the Kerberos principals used to connect to ZooKeeper.

Now edit the Kafka zookeeper.properties file and add 2 properties to strip the host and REALM parts of the Kerberos principal:

kerberos.removeHostFromPrincipal=true 
kerberos.removeRealmFromPrincipal=true 
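
For illustration, with both properties set the principal presented during SASL authentication is shortened before it is stored in the znode ACL (hypothetical host and realm names):

# full principal presented by the client during SASL authentication
zookeeper/zk_node.example.com@EXAMPLE.COM
# identity actually stored in the znode ACL, with host and REALM stripped
zookeeper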

Restart Kafka and ZooKeeper so the changes take effect. Again, this only affects newly created topics.

Now create a new topic. First set the variable:

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/zookeeper_jaas.conf"

Create a new topic named new-secure-topic:

./bin/kafka-topics.sh --zookeeper fqdn:2181 --create --topic new-secure-topic --replication-factor 1 --partitions 1

(set the replication factor and partition count to the correct values for your cluster)

Now connect to zookeeper and check the permissions

./bin/zookeeper-shell.sh localhost:2181 

Now check the ACL. Note the zookeeper principal is now shortened, stripped of hostname and REALM, and everyone else has only read (r) permission 🙂

getAcl /config/topics/new-secure-topic 
'world,'anyone 
:r 
'sasl,'zookeeper 
:cdrwa 

Compare with the topic created earlier:

getAcl /config/topics/first-topic
'world,'anyone
:cdrwa

Now you can restrict all the cdrwa permissions on the Kafka znodes! Note that topics created before enabling ACLs will not be affected.
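
For those pre-existing topics, one option (not covered above) is the zookeeper-security-migration.sh tool bundled with Kafka, which walks the Kafka znodes and re-applies secure ACLs. A sketch, assuming the same JAAS file as above:

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/zookeeper_jaas.conf"
./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=fqdn:2181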

Contributor

Hello Geoffrey,

First of all, thanks, this looks promising. This seems to be completely outside Ambari's control.

Does this mean that I need to manually create a zookeeper/{HOST}@{DOMAIN} keytab for each Kafka node?

Is there some way to automate this?
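
One way to script this against an MIT KDC is with kadmin; a rough sketch where the host names, admin principal, and paths are placeholders rather than values from this thread:

for host in kafka1.example.com kafka2.example.com kafka3.example.com; do
  # create a per-host zookeeper principal with a random key
  kadmin -p admin/admin@EXAMPLE.COM -q "addprinc -randkey zookeeper/${host}@EXAMPLE.COM"
  # export its key to a keytab, to be copied to that host
  kadmin -p admin/admin@EXAMPLE.COM -q "ktadd -k /tmp/zookeeper.service.${host}.keytab zookeeper/${host}@EXAMPLE.COM"
done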

Cloudera Employee

@Raffaele S

You can enable the Ranger plugin for Kafka.

After doing that, you can control describe/read/write/etc. per topic by logging in to the Ranger UI and setting the policies.

Contributor

Hello @Soumitra Sulav,

the Ranger plugin for Kafka is enabled and working: I am able to control who can produce/consume on a Kafka topic, but I am not able to control who can create/list/remove topics.

Cloudera Employee

@Raffaele S You can follow the steps below to create a Kafka policy in Ranger that limits access per user on a topic.

Enable Kafka Plugin in Ranger.

Go to the Ranger UI from the Quick Links on the Ranger component in Ambari.

Login with ranger admin user and password.

Click on <clustername>_kafka policy list under Kafka.

It will list current policies.

Click on Add New Policy button.

Fill in the Policy name.

In Topic you can specify each topic name you want to be controlled, or put * for all topics to be governed by this policy.

Now go to the Allow Conditions:

Put the users you want to allow under 'Select User', and similarly for groups.

In Add Permissions you can see all topic-related operations.

[Screenshot: 93467-rangerkafka.jpeg]

You can further add Deny Conditions and an IP address range, as well as exclusions from the Allow Conditions.
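
If you prefer scripting over the UI, policies can also be created through Ranger's public REST API; a sketch where the Ranger host, credentials, service name, topic, and user are placeholder assumptions:

curl -u admin:admin -H 'Content-Type: application/json' -X POST \
  'http://ranger-host.example.com:6080/service/public/v2/api/policy' \
  -d '{
    "service": "mycluster_kafka",
    "name": "test_topic-policy",
    "resources": { "topic": { "values": ["test_topic"] } },
    "policyItems": [{
      "users": ["user1"],
      "accesses": [
        { "type": "publish",  "isAllowed": true },
        { "type": "consume",  "isAllowed": true },
        { "type": "describe", "isAllowed": true }
      ]
    }]
  }'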

I hope this answers your question. If yes please accept.

Contributor

@Soumitra Sulav

This is already similar to my configuration

[Screenshot: 93469-hdf-ranger-policy.png]

The problem is that this configuration is not enforced by Ranger for anything related to creating/deleting topics (i.e., the operations that go through ZooKeeper).

Ranger only enforces publishing to and consuming from Kafka.
