
kafka broker not starting after enabling kerberos

Solved

I have configured ZooKeeper for Kerberos and it starts fine, but after configuring Kafka for Kerberos authentication the broker fails to start with the error below:

./kafka-server-start.sh ../config/server.properties [2018-09-04 08:14:50,014] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = SASL_PLAINTEXT://kafkaBrokerIP:9092 advertised.port = null authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 1 broker.id.generation.enable = true broker.rack = null compression.type = gzip connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 2 delete.topic.enable = false fetch.purgatory.purge.interval.requests = 1000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = kafkaBrokerIP inter.broker.listener.name = null inter.broker.protocol.version = 0.10.2-IV0 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT listeners = SASL_PLAINTEXT://kafkaBrokerIP:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /home/deepak/kafka/kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.format.version = 0.10.2-IV0 log.message.timestamp.difference.max.ms = 
9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = message.max.bytes = 40000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 9 num.partitions = 2 num.recovery.threads.per.data.dir = 1 num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 1440 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 port = 9092 principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder producer.purgatory.purge.interval.requests = 1000 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 104857600 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.enabled.mechanisms = [GSSAPI] sasl.kerberos.kinit.cmd = /usr/bin/kinit 
sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = kafka sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism.inter.broker.protocol = GSSAPI security.inter.broker.protocol = SASL_PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS unclean.leader.election.enable = true zookeeper.connect = kafkaBrokerIP:2182 zookeeper.connection.timeout.ms = 30000 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2018-09-04 08:14:50,058] INFO starting (kafka.server.KafkaServer) [2018-09-04 08:14:50,059] INFO Connecting to zookeeper on kafkaBrokerIP:2182 (kafka.server.KafkaServer) [2018-09-04 08:14:50,068] INFO JAAS File name: /home/deepak/kafka/kafka_jaas.conf (org.I0Itec.zkclient.ZkClient) [2018-09-04 08:14:50,069] INFO Starting ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread) [2018-09-04 08:14:50,072] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,072] INFO Client environment:host.name=ubuntu26.mstorm.com (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,072] INFO Client environment:java.version=1.8.0_171 (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,072] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,072] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,072] INFO Client environment:java.class.path=:/home/deepak/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/argparse4j-0.7.0.jar:/home/deepak/kafka/bin/../libs/connect-api-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-file-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-json-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-runtime-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-transforms-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/guava-18.0.jar:/home/deepak/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/home/deepak/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-core-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-databind-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/home/deepak/kafka/bin/../libs/javassist-3.20.0-GA.jar:/home/deepak/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/deepak/kafka/bin/../libs/javax.inject-1.jar:/home/deepak/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/hom
e/deepak/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/deepak/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/deepak/kafka/bin/../libs/jersey-client-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-common-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-guava-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-server-2.24.jar:/home/deepak/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jopt-simple-5.0.3.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0-sources.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0-test-sources.jar:/home/deepak/kafka/bin/../libs/kafka-clients-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-log4j-appender-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-streams-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-streams-examples-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-tools-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/log4j-1.2.17.jar:/home/deepak/kafka/bin/../libs/lz4-1.3.0.jar:/home/deepak/kafka/bin/../libs/metrics-core-2.2.0.jar:/home/deepak/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/home/deepak/kafka/bin/../libs/reflections-0.9.10.jar:/home/deepak/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/home/deepak/kafka/bin/../libs/scala-library-2.12.1.jar:/home/deepak/kafka/bin/../libs/scala-parse
r-combinators_2.12-1.0.4.jar:/home/deepak/kafka/bin/../libs/slf4j-api-1.7.21.jar:/home/deepak/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/home/deepak/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/home/deepak/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/home/deepak/kafka/bin/../libs/zkclient-0.10.jar:/home/deepak/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:os.version=4.4.0-128-generic (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Client environment:user.dir=/home/deepak/kafka/bin (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,073] INFO Initiating client connection, connectString=kafkaBrokerIP:2182 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@56de5251 (org.apache.zookeeper.ZooKeeper) [2018-09-04 08:14:50,084] INFO Waiting for keeper state SaslAuthenticated (org.I0Itec.zkclient.ZkClient) Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /etc/kafka/kafka.keytab refreshKrb5Config is false principal is kafka@MSTORM.COM tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 62; type: 18
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 46; type: 17
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 54; type: 16
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 46; type: 23
Looking for keys for: kafka@MSTORM.COM
Key for the principal kafka@MSTORM.COM not available in /etc/kafka/kafka.keytab
[2018-09-04 08:14:50,088] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[Krb5LoginModule] authentication failed
No password provided
[2018-09-04 08:14:50,089] WARN SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,090] INFO Opening socket connection to server kafkaBrokerIP/kafkaBrokerIP:2182 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,090] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2018-09-04 08:14:50,090] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-09-04 08:14:50,093] INFO Socket connection established to kafkaBrokerIP/kafkaBrokerIP:2182, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,114] INFO Session establishment complete on server kafkaBrokerIP/kafkaBrokerIP:2182, sessionid = 0x165a472b05e0002, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,120] INFO Session: 0x165a472b05e0002 closed (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,121] INFO EventThread shut down for session: 0x165a472b05e0002 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,121] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
	at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
	at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
	at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
	at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
	at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
	at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
	at kafka.Kafka$.main(Kafka.scala:67)
	at kafka.Kafka.main(Kafka.scala)
[2018-09-04 08:14:50,123] INFO shutting down (kafka.server.KafkaServer)
[2018-09-04 08:14:50,127] INFO shut down completed (kafka.server.KafkaServer)
[2018-09-04 08:14:50,128] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
	at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
	at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
	at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
	at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
	at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
	at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
	at kafka.Kafka$.main(Kafka.scala:67)
	at kafka.Kafka.main(Kafka.scala)
[2018-09-04 08:14:50,129] INFO shutting down (kafka.server.KafkaServer)
root@ubuntu26:/home/deepak/kafka/bin# vim ../zookeeper_jaas.conf
root@ubuntu26:/home/deepak/kafka/bin# vim ../kafka_jaas.conf
root@ubuntu26:/home/deepak/kafka/bin# kinit kafka@MSTORM.COM
kinit: Cannot find KDC for realm "MSTORM.COM" while getting initial credentials
root@ubuntu26:/home/deepak/kafka/bin# kinit -k -t /etc/kafka/kafka.keytab kafka@MSTORM.COM
kinit: Keytab contains no suitable keys for kafka@MSTORM.COM while getting initial credentials

Can you check what is the issue here?

1 ACCEPTED SOLUTION


Re: kafka broker not starting after enabling kerberos

Mentor

@Ankita Ghate

Your logs show the keytab at /etc/kafka/kafka.keytab; is that the correct location for the Kafka keytabs? On CentOS/RHEL the usual location is /etc/security/keytabs/*; please use the appropriate location on Ubuntu.

It seems you have an issue with your Kerberos configuration. Can you validate the following files? These paths are valid for CentOS/RHEL, so adapt them to your Ubuntu installation.

The contents of /etc/krb5.conf:

[libdefaults]
	renew_lifetime = 7d
	forwardable = true
	default_realm = MSTORM.COM
	ticket_lifetime = 24h
	dns_lookup_realm = false
	dns_lookup_kdc = false
	default_ccache_name = /tmp/krb5cc_%{uid}
	#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
	#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[domain_realm]
	.mstorm.com = MSTORM.COM
	mstorm.com = MSTORM.COM

[logging]
	default = FILE:/var/log/krb5kdc.log
	admin_server = FILE:/var/log/kadmind.log
	kdc = FILE:/var/log/krb5kdc.log

[realms]
	MSTORM.COM = {
		admin_server = kdc.mstorm.com
		kdc = kdc.mstorm.com
	}

The contents of /var/kerberos/krb5kdc/kdc.conf:

[kdcdefaults]
	kdc_ports = 88
	kdc_tcp_ports = 88

[realms]
	MSTORM.COM = {
		#master_key_type = aes256-cts
		acl_file = /var/kerberos/krb5kdc/kadm5.acl
		dict_file = /usr/share/dict/words
		admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
		supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
	}

The contents of /var/kerberos/krb5kdc/kadm5.acl:

*/admin@MSTORM.COM *

If you modified the above files, restart the KDC and kadmin (CentOS commands shown):

# service krb5kdc start
# service kadmin start
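If the keytab was exported without the full realm or host component, the principal in it will not match what the broker requests (your log shows kafka@MSTORM.COM requested but no matching key in the keytab). Below is a hedged sketch for recreating the principal and keytab on the KDC host with MIT Kerberos' kadmin.local; the principal name and keytab path are illustrative, and the run() wrapper only prints the commands unless RUN=1, so it is safe to review off the KDC host:

```shell
# Sketch (MIT Kerberos assumed; principal and keytab path are illustrative).
# run() prints commands instead of executing them unless RUN=1.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }
run kadmin.local -q "addprinc -randkey kafka/kafka1.hostname.com@MSTORM.COM"
run kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.service.keytab kafka/kafka1.hostname.com@MSTORM.COM"
```

Note that ktadd bumps the key version number (KVNO), so any older copies of the keytab become stale and must be redistributed.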

Check the keytab and use the principal that actually matches it:

$ klist -kt /etc/security/keytabs/kafka.service.keytab
Keytab name: FILE:/etc/security/keytabs/kafka.service.keytab
KVNO Timestamp         Principal
---- ----------------- ------------------------------------
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
   1 11/15/17 01:00:50 kafka/kafka1.hostname.com@MSTORM.COM
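The error in your log ("Key for the principal kafka@MSTORM.COM not available in /etc/kafka/kafka.keytab") is exactly this kind of principal/keytab mismatch. A minimal sketch of the check, using sample files in place of the real JAAS file and `klist -kt` output (the /tmp paths and sample contents are stand-ins, not your real files):

```shell
# Compare the principal named in the JAAS file against the keytab listing.
# The sample data reproduces the mismatch from this thread; in a real setup,
# read the actual JAAS file and the output of `klist -kt <keytab>` instead.
cat > /tmp/kafka_jaas_sample.conf <<'EOF'
KafkaClient {
	com.sun.security.auth.module.Krb5LoginModule required
	principal="kafka@MSTORM.COM";
};
EOF
cat > /tmp/keytab_listing.txt <<'EOF'
   2 09/04/2018 07:52:11 kafka@MSTORM (aes256-cts-hmac-sha1-96)
EOF
jaas_principal=$(grep -o 'principal="[^"]*"' /tmp/kafka_jaas_sample.conf | cut -d'"' -f2)
if grep -qF " $jaas_principal " /tmp/keytab_listing.txt; then
	echo "OK: $jaas_principal is in the keytab"
else
	echo "MISMATCH: $jaas_principal not found in the keytab"
fi
# prints: MISMATCH: kafka@MSTORM.COM not found in the keytab
```

With the sample data the check reports a mismatch, mirroring the thread: the keytab holds kafka@MSTORM while the JAAS file asks for kafka@MSTORM.COM.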

Then try grabbing a ticket:

# kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/kafka1.hostname.com@MSTORM.COM
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafka/kafka1.hostname.com@MSTORM.COM
Valid starting       Expires              Service principal
09/05/18 22:29:11    09/06/18 22:29:11    krbtgt/MSTORM.COM@MSTORM.COM
	renew until 09/05/18 22:29:11

The above should succeed

This is how the Kafka client JAAS file (/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf) should look:

KafkaClient {
	com.sun.security.auth.module.Krb5LoginModule required
	useKeyTab=true
	keyTab="/etc/security/keytabs/kafka.service.keytab"
	storeKey=true
	useTicketCache=false
	serviceName="kafka"
	principal="kafka/kafka1.hostname.com@MSTORM.COM";
};

Now retry; the broker should start.
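Once the keytab and JAAS file agree, the broker JVM also has to be told where the JAAS file lives. One common way is the standard java.security.auth.login.config system property passed through KAFKA_OPTS before the start script runs; the path below is the JAAS file shown in your log ("JAAS File name: /home/deepak/kafka/kafka_jaas.conf"), so adjust it to your layout:

```shell
# Point the broker JVM at the JAAS configuration, then start the broker.
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/deepak/kafka/kafka_jaas.conf"
# ./kafka-server-start.sh ../config/server.properties   # run this on the broker host
echo "$KAFKA_OPTS"
```

kafka-server-start.sh forwards KAFKA_OPTS to the JVM, so the Krb5LoginModule picks up the KafkaServer/KafkaClient sections from that file at startup.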


3 REPLIES

Re: kafka broker not starting after enabling kerberos

Below is the output of klist:

klist -ket /etc/kafka/kafka.keytab

Keytab name: FILE:/etc/kafka/kafka.keytab
KVNO Timestamp           Principal
---- ------------------- --------------------------------------
   2 09/04/2018 07:52:11 kafka@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:52:11 kafka@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:52:11 kafka@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:52:11 kafka@MSTORM (arcfour-hmac)

klist -ket /etc/kafka/zookeeper.keytab

Keytab name: FILE:/etc/kafka/zookeeper.keytab
KVNO Timestamp           Principal
---- ------------------- --------------------------------------
   2 09/04/2018 07:51:50 zookeeper@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (arcfour-hmac)

klist -ket /etc/kafka/kafka-client.keytab

Keytab name: FILE:/etc/kafka/kafka-client.keytab
KVNO Timestamp           Principal
---- ------------------- --------------------------------------
   2 09/04/2018 07:52:31 kafka-client@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (arcfour-hmac)



Re: kafka broker not starting after enabling kerberos

@Geoffrey Shelton Okot Thanks for the response. Kafka has started, but its logs show errors.

Please find attached the kafka-zookeeper-kerberos-invalidacl.txt file.
