
Kafka with Kerberos - WARN [Principal=null]: TGT renewal thread has been interrupted and will exit.

Contributor

Hoping someone can help:

I'm on an Ambari-managed platform (HDP 3.1) with Kerberos enabled. Before running the command below I did `kinit -V -kt kafka.keytab kafka-principal@realm` and got "Authenticated to Kerberos v5":

 

./kafka-console-producer.sh --broker-list host.kafka:6667 --topic cleanCsv --producer-property security.protocol=SASL_PLAINTEXT < /tmp/clean_csv_full.csv

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
>>> [console producer '>' prompts elided] [2022-11-17 17:06:35,693] WARN [Principal=null]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)


There are no issues with creating topics or other operations; the error only appears when I try to push the CSV into the topic. Without Kerberos everything works and the file uploads smoothly.

Any help would be much appreciated. Thank you in advance; I look forward to your replies.
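(For context: this WARN is typically emitted at normal client shutdown. Closing the producer interrupts the background Kerberos login thread that sleeps between TGT refreshes, and that thread logs the message as it exits. The sketch below is an illustrative, non-Kafka model of that pattern, not Kafka's actual code.)

```python
import threading

def tgt_renewal_loop(stop: threading.Event) -> None:
    """Illustrative sketch of a background TGT renewal thread."""
    # Sleep until the next scheduled refresh; a real client would
    # re-acquire the ticket here. stop.set() models the interrupt
    # delivered when the producer is closed.
    while not stop.wait(timeout=3600):
        pass  # refresh the TGT here
    print("WARN: TGT renewal thread has been interrupted and will exit.")

stop = threading.Event()
renewer = threading.Thread(target=tgt_renewal_loop, args=(stop,))
renewer.start()
# ... send records ...
stop.set()      # producer.close() stops the background login thread
renewer.join()
```

So the WARN itself is usually a side effect of shutdown rather than the cause of a failed produce; the interesting part is whatever happens before it.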

5 REPLIES

Contributor

Here is the output after enabling DEBUG logging. Can anyone help pinpoint what is wrong and why I am getting this WARN?

 [2023-01-08 03:23:08,654] WARN [Principal=null]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)

$> kafka-console-producer.sh --broker-list test.kafka:6667 --topic cleanCsv --producer-property security.protocol=SASL_PLAINTEXT < /tmp/clean_csv_full.csv

Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2023-01-08 03:23:06,861] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-01-08 03:23:06,922] INFO ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [test.kafka:6667]
	buffer.memory = 33554432
	client.id = console-producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 1000
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 1500
	retries = 3
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = SASL_PLAINTEXT
	send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2023-01-08 03:23:06,939] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:06,942] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:06,945] DEBUG Updated cluster metadata version 1 to Cluster(id = null, nodes = [test.kafka:6667 (id: -1 rack: null)], partitions = [], controller = null) (org.apache.kafka.clients.Metadata)
[2023-01-08 03:23:07,011] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2023-01-08 03:23:07,012] DEBUG [Principal=null]: It is a Kerberos ticket (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,014] INFO [Principal=null]: TGT refresh thread started. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,014] DEBUG Found TGT with client principal 'kafka/test.kafka@TEST' and server principal 'krbtgt/TEST@TEST'. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,017] DEBUG Added sensor with name produce-throttle-time (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,024] INFO [Principal=null]: TGT valid starting at: Sun Jan 08 03:15:26 UTC 2023 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,025] INFO [Principal=null]: TGT expires: Sun Jan 08 13:15:26 UTC 2023 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,025] INFO [Principal=null]: TGT refresh sleeping until: Sun Jan 08 11:42:44 UTC 2023 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:07,026] DEBUG Added sensor with name connections-closed: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,027] DEBUG Added sensor with name connections-created: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,031] DEBUG Added sensor with name successful-authentication: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,031] DEBUG Added sensor with name failed-authentication: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,032] DEBUG Added sensor with name bytes-sent-received: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,032] DEBUG Added sensor with name bytes-sent: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,033] DEBUG Added sensor with name bytes-received: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,034] DEBUG Added sensor with name select-time: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,035] DEBUG Added sensor with name io-time: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,039] DEBUG Added sensor with name batch-size (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,039] DEBUG Added sensor with name compression-rate (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,039] DEBUG Added sensor with name queue-time (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,039] DEBUG Added sensor with name request-time (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,040] DEBUG Added sensor with name records-per-request (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,040] DEBUG Added sensor with name record-retries (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,040] DEBUG Added sensor with name errors (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,041] DEBUG Added sensor with name record-size (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,042] DEBUG Added sensor with name batch-split-rate (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,042] DEBUG [Producer clientId=console-producer] Starting Kafka producer I/O thread. (org.apache.kafka.clients.producer.internals.Sender)
[2023-01-08 03:23:07,044] INFO Kafka version : 2.0.0.3.1.5.6091-7 (org.apache.kafka.common.utils.AppInfoParser)
[2023-01-08 03:23:07,044] INFO Kafka commitId : a10a6e16779f1930 (org.apache.kafka.common.utils.AppInfoParser)
[2023-01-08 03:23:07,045] DEBUG [Producer clientId=console-producer] Kafka producer started (org.apache.kafka.clients.producer.KafkaProducer)
>[2023-01-08 03:23:07,062] DEBUG [Producer clientId=console-producer] Initialize connection to node test.kafka:6667 (id: -1 rack: null) for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,062] DEBUG [Producer clientId=console-producer] Initiating connection to node test.kafka:6667 (id: -1 rack: null) (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,069] DEBUG Set SASL client state to SEND_APIVERSIONS_REQUEST (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,070] DEBUG Creating SaslClient: client=kafka/test.kafka@TEST;service=kafka;serviceHostname=test.kafka;mechs=[GSSAPI] (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,105] DEBUG Added sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,106] DEBUG Added sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,106] DEBUG Added sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:07,107] DEBUG [Producer clientId=console-producer] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 102400, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector)
[2023-01-08 03:23:07,187] DEBUG Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,188] DEBUG [Producer clientId=console-producer] Completed connection to node -1. Fetching API versions. (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,190] DEBUG Set SASL client state to SEND_HANDSHAKE_REQUEST (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,190] DEBUG Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,191] DEBUG Set SASL client state to INITIAL (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,307] DEBUG Set SASL client state to INTERMEDIATE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,310] DEBUG Set SASL client state to CLIENT_COMPLETE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,310] DEBUG Set SASL client state to COMPLETE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,310] DEBUG [Producer clientId=console-producer] Initiating API versions fetch from node -1. (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,314] DEBUG [Producer clientId=console-producer] Recorded API versions for node -1: (Produce(0): 0 to 6 [usable: 6], Fetch(1): 0 to 8 [usable: 8], ListOffsets(2): 0 to 3 [usable: 3], Metadata(3): 0 to 6 [usable: 6], LeaderAndIsr(4): 0 to 1 [usable: 1], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 4 [usable: 4], ControlledShutdown(7): 0 to 1 [usable: 1], OffsetCommit(8): 0 to 4 [usable: 4], OffsetFetch(9): 0 to 4 [usable: 4], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 3 [usable: 3], Heartbeat(12): 0 to 2 [usable: 2], LeaveGroup(13): 0 to 2 [usable: 2], SyncGroup(14): 0 to 2 [usable: 2], DescribeGroups(15): 0 to 2 [usable: 2], ListGroups(16): 0 to 2 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 2 [usable: 2], CreateTopics(19): 0 to 3 [usable: 3], DeleteTopics(20): 0 to 2 [usable: 2], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 1 [usable: 1], OffsetForLeaderEpoch(23): 0 to 1 [usable: 1], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 1 [usable: 1], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 [usable: 0], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 1 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 1 [usable: 1]) (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,314] DEBUG [Producer clientId=console-producer] Sending metadata request (type=MetadataRequest, topics=cleanCsv) to node test.kafka:6667 (id: -1 rack: null) (org.apache.kafka.clients.NetworkClient)
[2023-01-08 03:23:07,321] INFO Cluster ID: g5oEj_9HRpS5nx3hsiK-hA (org.apache.kafka.clients.Metadata)
[2023-01-08 03:23:07,321] DEBUG Updated cluster metadata version 2 to Cluster(id = g5oEj_9HRpS5nx3hsiK-hA, nodes = [test.kafka:6667 (id: 1001 rack: null)], partitions = [Partition(topic = cleanCsv, partition = 0, leader = 1001, replicas = [1001], isr = [1001], offlineReplicas = [])], controller = test.kafka:6667 (id: 1001 rack: null)) (org.apache.kafka.clients.Metadata)
>>> [console producer '>' prompts elided] [2023-01-08 03:23:07,340] DEBUG [Producer clientId=console-producer] Initiating connection to node test.kafka:6667 (id: 1001 rack: null) (org.apache.kafka.clients.NetworkClient)
>>[2023-01-08 03:23:07,340] DEBUG Set SASL client state to SEND_APIVERSIONS_REQUEST (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,340] DEBUG Creating SaslClient: client=kafka/test.kafka@TEST;service=kafka;serviceHostname=test.kafka;mechs=[GSSAPI] (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>>>>>>>>>>>[2023-01-08 03:23:07,341] DEBUG Added sensor with name node-1001.bytes-sent (org.apache.kafka.common.metrics.Metrics)
>>>>>>>>>>>>[2023-01-08 03:23:07,342] DEBUG Added sensor with name node-1001.bytes-received (org.apache.kafka.common.metrics.Metrics)
>>>>>>>[2023-01-08 03:23:07,343] DEBUG Added sensor with name node-1001.latency (org.apache.kafka.common.metrics.Metrics)
>>>[2023-01-08 03:23:07,343] DEBUG [Producer clientId=console-producer] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 102400, SO_TIMEOUT = 0 to node 1001 (org.apache.kafka.common.network.Selector)
>>>>[2023-01-08 03:23:07,344] DEBUG Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>[2023-01-08 03:23:07,344] DEBUG [Producer clientId=console-producer] Completed connection to node 1001. Fetching API versions. (org.apache.kafka.clients.NetworkClient)
>>>>>[2023-01-08 03:23:07,346] DEBUG Set SASL client state to SEND_HANDSHAKE_REQUEST (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>>>>[2023-01-08 03:23:07,347] DEBUG Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2023-01-08 03:23:07,347] DEBUG Set SASL client state to INITIAL (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>>>>>[2023-01-08 03:23:07,349] DEBUG Set SASL client state to INTERMEDIATE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>>>>>>>>>>>>>>>>>>>>>>>>[2023-01-08 03:23:07,351] DEBUG Set SASL client state to CLIENT_COMPLETE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>>>>>>>>[2023-01-08 03:23:07,352] DEBUG Set SASL client state to COMPLETE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
>>[2023-01-08 03:23:07,352] DEBUG [Producer clientId=console-producer] Initiating API versions fetch from node 1001. (org.apache.kafka.clients.NetworkClient)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>[2023-01-08 03:23:07,354] DEBUG [Producer clientId=console-producer] Recorded API versions for node 1001: (Produce(0): 0 to 6 [usable: 6], Fetch(1): 0 to 8 [usable: 8], ListOffsets(2): 0 to 3 [usable: 3], Metadata(3): 0 to 6 [usable: 6], LeaderAndIsr(4): 0 to 1 [usable: 1], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 4 [usable: 4], ControlledShutdown(7): 0 to 1 [usable: 1], OffsetCommit(8): 0 to 4 [usable: 4], OffsetFetch(9): 0 to 4 [usable: 4], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 3 [usable: 3], Heartbeat(12): 0 to 2 [usable: 2], LeaveGroup(13): 0 to 2 [usable: 2], SyncGroup(14): 0 to 2 [usable: 2], DescribeGroups(15): 0 to 2 [usable: 2], ListGroups(16): 0 to 2 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 2 [usable: 2], CreateTopics(19): 0 to 3 [usable: 3], DeleteTopics(20): 0 to 2 [usable: 2], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 1 [usable: 1], OffsetForLeaderEpoch(23): 0 to 1 [usable: 1], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 1 [usable: 1], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 [usable: 0], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 1 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 1 [usable: 1]) (org.apache.kafka.clients.NetworkClient)
>>[2023-01-08 03:23:07,357] DEBUG Added sensor with name topic.cleanCsv.records-per-batch (org.apache.kafka.common.metrics.Metrics)
>>>>[2023-01-08 03:23:07,357] DEBUG Added sensor with name topic.cleanCsv.bytes (org.apache.kafka.common.metrics.Metrics)
>>[2023-01-08 03:23:07,358] DEBUG Added sensor with name topic.cleanCsv.compression-rate (org.apache.kafka.common.metrics.Metrics)
>[2023-01-08 03:23:07,358] DEBUG Added sensor with name topic.cleanCsv.record-retries (org.apache.kafka.common.metrics.Metrics)
>>>[2023-01-08 03:23:07,358] DEBUG Added sensor with name topic.cleanCsv.record-errors (org.apache.kafka.common.metrics.Metrics)
>>> [console producer '>' prompts elided] [2023-01-08 03:23:08,645] INFO [Producer clientId=console-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2023-01-08 03:23:08,645] DEBUG [Producer clientId=console-producer] Beginning shutdown of Kafka producer I/O thread, sending remaining records. (org.apache.kafka.clients.producer.internals.Sender)
[2023-01-08 03:23:08,647] DEBUG Removed sensor with name connections-closed: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,647] DEBUG Removed sensor with name connections-created: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,647] DEBUG Removed sensor with name successful-authentication: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,647] DEBUG Removed sensor with name failed-authentication: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,648] DEBUG Removed sensor with name bytes-sent-received: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,648] DEBUG Removed sensor with name bytes-sent: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,648] DEBUG Removed sensor with name bytes-received: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,649] DEBUG Removed sensor with name select-time: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,649] DEBUG Removed sensor with name io-time: (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,649] DEBUG Removed sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,649] DEBUG Removed sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,650] DEBUG Removed sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,650] DEBUG Removed sensor with name node-1001.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,650] DEBUG Removed sensor with name node-1001.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,650] DEBUG Removed sensor with name node-1001.latency (org.apache.kafka.common.metrics.Metrics)
[2023-01-08 03:23:08,654] WARN [Principal=null]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2023-01-08 03:23:08,654] DEBUG [Producer clientId=console-producer] Shutdown of Kafka producer I/O thread has completed. (org.apache.kafka.clients.producer.internals.Sender)
[2023-01-08 03:23:08,655] DEBUG [Producer clientId=console-producer] Kafka producer has been closed (org.apache.kafka.clients.producer.KafkaProducer)
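As a side note, the "TGT refresh sleeping until 11:42:44" line is consistent with the renewal settings shown in the ProducerConfig dump (window factor 0.8, jitter 0.05): the client schedules the refresh at a random point between 80% and 85% of the ticket lifetime. A quick sanity check with the timestamps from the log above (illustrative Python, not Kafka code):

```python
from datetime import datetime, timedelta

# Timestamps from the log above
start    = datetime(2023, 1, 8, 3, 15, 26)   # TGT valid starting at
expiry   = datetime(2023, 1, 8, 13, 15, 26)  # TGT expires
observed = datetime(2023, 1, 8, 11, 42, 44)  # TGT refresh sleeping until

lifetime = (expiry - start).total_seconds()  # 36000 s = 10 h

# Values from the ProducerConfig dump
window_factor = 0.8   # sasl.kerberos.ticket.renew.window.factor
jitter        = 0.05  # sasl.kerberos.ticket.renew.jitter

earliest = start + timedelta(seconds=window_factor * lifetime)
latest   = earliest + timedelta(seconds=jitter * lifetime)

print(earliest, "<=", observed, "<=", latest)
print(earliest <= observed <= latest)  # True
```

So the login and refresh scheduling are behaving normally here; the WARN at 03:23:08 simply coincides with the producer closing, as the surrounding shutdown messages show.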

Explorer

Hey there,

Has anybody found a solution for this? I'm getting thousands of similar messages per minute on an HBase RegionServer:

WARN org.apache.zookeeper.Login: TGT renewal thread has been interrupted and will exit.

It only ever affects a single RegionServer at a time, but after a restart of the use case's application it shows up on a different RegionServer.

Regards, Timo

Super Collaborator

Hello @Timo @George-Megre,

 

If you are facing this issue after enabling Kerberos and are unable to produce or consume, we suggest following the steps below and letting us know how it goes:

 

First, make sure that all partitions are in a healthy state using the Kafka describe command and that there are no warnings/alerts for Kafka in CM. If there are no alerts, follow these steps to connect to the Kafka topic:

 

1) kinit with the keytab and make sure the user has the required permissions granted in Ranger.

2) Create a jaas.conf file with the following contents:

 

vi /tmp/jaas.conf

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="kafka";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="zookeeper";
};

 

3) Export the JAAS config so the console tools pick it up:

export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/jaas.conf"

 

Note: Replace /tmp/jaas.conf with the full path to your jaas.conf file.

 

4) Create the client.properties file containing the following properties:

vi /tmp/client.properties

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

 

5) Start the console producer:

kafka-console-producer --broker-list <broker1.test.com:6667,broker2.test.com:6667> --topic <topic-name> --producer.config /tmp/client.properties

 

6) Start the console consumer:

kafka-console-consumer --bootstrap-server <broker1.test.com:6667,broker2.test.com:6667> --topic <topic-name> --consumer.config /tmp/client.properties --from-beginning

 

Note: Use the complete hostnames of the brokers, and replace the topic name and the client.properties path in the above commands.

 

If you found this response helpful, please take a moment to log in and click KUDOS 🙂 and "Accept as Solution" below this post.

 

Thank you. 

Explorer

Thanks for your reply. In our case the Kafka cluster is not running on CDP, and consuming data from Kafka is not a problem. Please have a look at this: "org.apache.zookeeper.Login: TGT renewal thread ha... - Cloudera Community - 373868"

Cloudera Employee

The log level can also be set in /etc/kafka/conf/tools-log4j.properties.
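For example, raising the console tools' verbosity usually means changing the rootLogger line in that file. The exact contents vary by distribution; this is only a sketch of the common layout:

```properties
# /etc/kafka/conf/tools-log4j.properties (typical layout; may differ per distribution)
# Change WARN to DEBUG for verbose output from the console tools
log4j.rootLogger=DEBUG, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
```

Remember to set it back afterwards, since DEBUG output from the console producer/consumer is very noisy.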