
Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

Contributor

I recently installed Kafka onto an already secured cluster. I've configured Kafka to use Kerberos and SSL, and set the protocol to SASL_SSL, roughly following the documentation here (I used certificates already created): https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html

 

When I bring up kafka-console-consumer, a few minor log messages come up, and then it sits waiting for messages correctly. When I bring up kafka-console-producer, the same happens. I am pointing both to the same node, which is both a Kafka broker and a ZooKeeper node, using port 9092 for the producer and port 2181 for the consumer. If I type something into the console for the producer, however, nothing happens for a while, and then I get the following error:

 

17/07/26 13:11:20 ERROR internals.ErrorLoggingCallback: Error when sending message to topic test with key: null, value: 5 bytes with error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
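
For reference, the commands I am running look roughly like this (hostnames and file paths below are placeholders; client.properties sets security.protocol=SASL_SSL and the truststore location/password):

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"
kafka-console-producer --broker-list broker01.example.com:9092 --topic test --producer.config client.properties
kafka-console-consumer --zookeeper broker01.example.com:2181 --topic test --from-beginning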

 

The Kafka logs in that timeframe don't seem to have any errors or warnings. The Zookeeper logs are also clean except for one warning that shows up only in the log of the zookeeper node I am pointing the consumer to: 

 

2017-07-26 13:10:17,379 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.EOFException

 

Any ideas on what would cause this behavior or how to further debug what the issue is?

 

1 ACCEPTED SOLUTION


This indicates that your jaas.conf references a keytab that needs a password, or that you are using the ticket cache without doing a kinit before running this command.

Confirm that you are able to connect to the cluster (hdfs dfs -ls /) from the command line first, and then check your jaas.conf based on this documentation:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html
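
For example, a jaas.conf that relies on the ticket cache (so you must kinit as a user with access to the topic before running the console tools) generally looks like this:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};

whereas a keytab-based jaas.conf points at the keytab and principal directly (the path and principal below are placeholders):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/kafka_client.keytab"
  principal="user@EXAMPLE.COM";
};

Either way, point the console tools at it with KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf".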

-pd


16 REPLIES


Contributor

OK, finally got everything working.

 

As for the last error I had been seeing, I had thought for sure my Kerberos credentials were still showing up in klist, but this morning after I ran kinit everything worked fine, so that must have been the issue.

 

I then got an error on the consumer side, which I soon realized was because with the new --bootstrap-server parameter you need to use the broker port, the same one as the producer (9093 in my case), not the ZooKeeper port. Once I updated this, everything worked properly.
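
In other words, something along these lines worked for the consumer, assuming a consumer.properties with the same SASL_SSL and truststore settings as the producer (the hostname is a placeholder):

kafka-console-consumer --bootstrap-server broker01.example.com:9093 --topic test --from-beginning --consumer.config consumer.properties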

Explorer

Hi,

 

I have an issue with Kafka: while running a stream from producer to consumer, I am facing this error:

 


org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms
ERROR Error when sending message to topic binary_kafka_source with key: null, value: 175 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
 
 
Can anyone please help?
 

Explorer
and I also see these messages in the log:

17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)

New Contributor

Hi Team,

 

I am getting the Kafka exceptions below in the log. Can anyone help me understand why we are getting these exceptions?

30 08:10:51.052 [Thread-13] org.apache.kafka.common.KafkaException: Failed to construct kafka producer

30 04:48:04.035 [Thread-1] org.apache.kafka.common.KafkaException: Failed to construct kafka consumer

 

Thank you all for your help.

There isn't enough information here to determine what the problem could be. If you can provide more log entries and your configuration, that may help.

-pd

New Contributor

I have the very same problem as mcginnda.

I am trying to configure the Kafka broker to support PLAINTEXT and SSL at the same time, with a server.properties config like this:

listeners=PLAINTEXT://test-ip:9092,SSL://test-ip:9093
advertised.listeners=PLAINTEXT://test-ip:9092,SSL://test-ip:9093
advertised.host.name=test-ip
delete.topic.enable=true

ssl.keystore.location=/kafka/ssl/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/kafka/ssl/server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth = required
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG

 

Now, when I try to use a consumer client to connect to the Kafka server, it does not work.
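
My understanding is that the consumer then needs client-side SSL properties roughly like these (and, since ssl.client.auth=required is set on the broker, its own keystore as well; the paths and passwords below are placeholders):

security.protocol=SSL
ssl.truststore.location=/kafka/ssl/client.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/kafka/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234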

 

In server.log, there are a lot of errors like this:

[2018-12-20 15:58:42,295] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:748)

 

and the consumer DEBUG log shows an error like this:

2018-12-20 16:04:08,103 DEBUG ZTE org.apache.kafka.common.network.Selector TransactionID=null InstanceID=null [] Connection with test-ip/110.10.10.100 disconnected [Selector.java] [307]
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:877)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1$$anonfun$apply$mcV$sp$2.apply(KafkaClientProvider.scala:59)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1$$anonfun$apply$mcV$sp$2.apply(KafkaClientProvider.scala:57)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at com.zte.nfv.core.InfiniteIterate.foreach(InfiniteIterate.scala:4)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply$mcV$sp(KafkaClientProvider.scala:57)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply(KafkaClientProvider.scala:54)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply(KafkaClientProvider.scala:54)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)