
Kafka broker fails to start after disabling Kerberos


Team,

After disabling Kerberos, the Kafka brokers fail to start with the error below. I am using HDP 2.6 and Ambari 2.6.

Error:

[2018-02-13 04:44:11,112] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of the JAAS login file failed.
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:314)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:200)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
        at kafka.Kafka$.main(Kafka.scala:67)
        at kafka.Kafka.main(Kafka.scala)
[2018-02-13 04:44:11,113] INFO shutting down (kafka.server.KafkaServer)

Below is the server.properties from the Kafka broker:

[root@vijayhdf-1 conf]# cat server.properties
# Generated by Apache Ambari. Tue Feb 13 04:53:24 2018
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec,kafka.server.KafkaServer.ClusterId
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.hosts=vijayhdp-1.novalocal
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=bigdata
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://vijayhdf-1.novalocal:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](ambari-qa-hdptest@NOVALOCAL.COM)s/.*/ambari-qa/,RULE:[1:$1@$0](hbase-hdptest@NOVALOCAL.COM)s/.*/hbase/,RULE:[1:$1@$0](hdfs-hdptest@NOVALOCAL.COM)s/.*/hdfs/,RULE:[1:$1@$0](.*@NOVALOCAL.COM)s/@.*//,RULE:[2:$1@$0](activity_analyzer@NOVALOCAL.COM)s/.*/activity_analyzer/,RULE:[2:$1@$0](activity_explorer@NOVALOCAL.COM)s/.*/activity_explorer/,RULE:[2:$1@$0](amshbase@NOVALOCAL.COM)s/.*/ams/,RULE:[2:$1@$0](amszk@NOVALOCAL.COM)s/.*/ams/,RULE:[2:$1@$0](dn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](hbase@NOVALOCAL.COM)s/.*/hbase/,RULE:[2:$1@$0](hive@NOVALOCAL.COM)s/.*/hive/,RULE:[2:$1@$0](jhs@NOVALOCAL.COM)s/.*/mapred/,RULE:[2:$1@$0](jn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](knox@NOVALOCAL.COM)s/.*/knox/,RULE:[2:$1@$0](nifi@NOVALOCAL.COM)s/.*/nifi/,RULE:[2:$1@$0](nm@NOVALOCAL.COM)s/.*/yarn/,RULE:[2:$1@$0](nn@NOVALOCAL.COM)s/.*/hdfs/,RULE:[2:$1@$0](rangeradmin@NOVALOCAL.COM)s/.*/ranger/,RULE:[2:$1@$0](rangertagsync@NOVALOCAL.COM)s/.*/rangertagsync/,RULE:[2:$1@$0](rangerusersync@NOVALOCAL.COM)s/.*/rangerusersync/,RULE:[2:$1@$0](rm@NOVALOCAL.COM)s/.*/yarn/,RULE:[2:$1@$0](yarn@NOVALOCAL.COM)s/.*/yarn/,DEFAULT
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=vijayhdp-3.novalocal:2181,vijayhdp-2.novalocal:2181,vijayhdp-1.novalocal:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.set.acl=true
zookeeper.sync.time.ms=2000

Kindly help me fix the issue.


4 REPLIES

@Vijay Mishra

The ZooKeeper ACLs need to be changed before disabling Kerberos. Do you have any important data in Kafka?
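If you want to see what is locking the brokers out, you can inspect the ACLs on the Kafka znodes from the ZooKeeper shell (the hostname below is just one of the ZooKeeper servers from your zookeeper.connect):

/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh vijayhdp-1.novalocal:2181
getAcl /brokers
getAcl /config

On a kerberized cluster these should show a sasl ACL for the kafka principal instead of world:anyone.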


@Sandeep Nemuri,

No, there is no important data in the Kafka brokers.

Kindly suggest.

- Vijay Mishra

@Vijay Mishra

One quick thing you can do is change the Kafka root znode in ZooKeeper (this creates a new znode, so Kafka will not have any reference to the old data):

zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
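After changing this and restarting, Kafka should create the new /kafka chroot on its own; you can confirm the brokers registered under it from the ZooKeeper shell (zk1 is a placeholder, as above):

/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh zk1:2181
ls /kafka
ls /kafka/brokers/ids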

Alternatively, re-enable Kerberos and change the ACLs using the commands below (Kafka will keep its old data):

Log in as user "kafka" on one of the Kafka nodes:
kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/_HOST
where _HOST should be replaced by the hostname of that node.
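For example, if the broker host is vijayhdp-1.novalocal (taking a hostname from the config above; adjust to your node), that would be:

kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/vijayhdp-1.novalocal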
Run the following command to open the ZooKeeper shell:
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh zkhostname:2181
Then set the ACLs on the Kafka znodes:
setAcl /brokers world:anyone:crdwa
setAcl /config world:anyone:crdwa
setAcl /controller world:anyone:crdwa
setAcl /admin world:anyone:crdwa
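You can re-check the result with getAcl before disabling Kerberos again; each path should now show 'world,'anyone with cdrwa permissions:

getAcl /brokers
getAcl /config
getAcl /controller
getAcl /admin

One caveat: setAcl is not recursive in the ZooKeeper 3.4 line that ships with HDP 2.6, so child znodes such as /brokers/ids keep their old ACLs and may need the same treatment.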

After changing the ACLs, disable Kerberos.
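(Side note: the SecurityException in the log is thrown because the broker still has zookeeper.set.acl=true after Kerberos was removed, so it is also worth confirming that the Kafka config ends up with

zookeeper.set.acl=false

once Kerberos is disabled. Ambari normally resets this, but the server.properties above shows it was still true.)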


@Sandeep Nemuri

I re-enabled and then disabled Kerberos, which fixed the issue of Kafka not coming up after disabling Kerberos.

Your solution also looks good; I will try it if I hit the error again.

The other problem I have is that the Kafka principals and keytabs are not getting created after enabling Kerberos again on the same cluster. Is there anything you can suggest?

- Vijay Mishra