Cloudera Employee
Posts: 250
Registered: ‎01-09-2014

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster


This indicates that your jaas.conf references a keytab that requires a password, or that you are using the ticket cache without running kinit before this command.

Confirm that you can connect to the cluster (hdfs dfs -ls /) from the command line first, and then check your jaas.conf against this documentation:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html
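
For reference, a minimal jaas.conf sketch along the lines of that documentation; the principal and keytab path are placeholders for your own:

// Keytab-based login (no password prompt, no kinit needed)
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/user.keytab"
  principal="user@EXAMPLE.COM";
};

Or, if you prefer to kinit manually, use the ticket cache instead:

// Ticket-cache-based login (run kinit first)
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};

Then point the console tools at it, e.g. export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf".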

-pd

Explorer
Posts: 11
Registered: ‎07-17-2017

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

OK, finally got everything working.

 

As for the last error I was seeing: I thought for sure my Kerberos credentials were still showing up in klist, but this morning when I ran kinit again everything worked fine, so an expired ticket must have been the issue.
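
In case it helps anyone else, this is the check I should have run first (the principal is a placeholder for your own):

# verify there is a valid, unexpired ticket
klist
# if not, get a fresh one before running the console tools
kinit user@EXAMPLE.COM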

 

I then got an error on the consumer side, which I soon realized was because the new --bootstrap-server parameter needs the same broker port as the producer (9093 in my case), not the ZooKeeper port. Once I updated this, everything worked properly.
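
For reference, the working invocations looked roughly like this (host, topic, and client config file are placeholders, and flag names can differ between Kafka versions):

kafka-console-producer --broker-list broker-host:9093 --topic test --producer.config client.properties
kafka-console-consumer --bootstrap-server broker-host:9093 --topic test --consumer.config client.properties

The key point is that --bootstrap-server takes the broker's secured port (9093 here), not ZooKeeper's 2181.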

New Contributor
Posts: 5
Registered: ‎11-16-2017

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

Hi,

I have an issue with Kafka: while running a stream from the producer to the consumer I get this error:

 


org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms
ERROR Error when sending message to topic binary_kafka_source with key: null, value: 175 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
 
 
Can anyone please help?
 
New Contributor
Posts: 5
Registered: ‎11-16-2017

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

and these ZooKeeper state changes:
17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:23 INFO zkclient.ZkClient: zookeeper state changed (Disconnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
17/11/16 12:23:24 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
New Contributor
Posts: 1
Registered: ‎03-30-2018

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

Hi Team,

 

I am getting the Kafka exceptions below in my logs. Can anyone help me understand why we are getting them?

30 08:10:51.052 [Thread-13] org.apache.kafka.common.KafkaException: Failed to construct kafka producer

30 04:48:04.035 [Thread-1] org.apache.kafka.common.KafkaException: Failed to construct kafka consumer

 

Thank you all for your help.

Cloudera Employee
Posts: 250
Registered: ‎01-09-2014

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

There isn't enough information here to determine what the problem could be. If you can provide more log entries and your configuration, that may help.

-pd
New Contributor
Posts: 1
Registered: ‎12-20-2018

Re: Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

I have the very same problem as mcginnda.

I am trying to configure the Kafka broker to support PLAINTEXT and SSL at the same time, with server.properties set like this:

listeners=PLAINTEXT://test-ip:9092,SSL://test-ip:9093
advertised.listeners=PLAINTEXT://test-ip:9092,SSL://test-ip:9093
advertised.host.name=test-ip
delete.topic.enable=true

ssl.keystore.location=/kafka/ssl/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/kafka/ssl/server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth = required
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG

 

Now I am trying to use a consumer client to connect to the Kafka server, but it does not work.
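
The consumer points at the SSL listener on port 9093 with a client config along these lines (the paths and passwords stand in for my real values):

security.protocol=SSL
ssl.truststore.location=/kafka/ssl/client.truststore.jks
ssl.truststore.password=test1234
# the broker sets ssl.client.auth=required, so the client needs its own keystore too
ssl.keystore.location=/kafka/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234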

 

In server.log there are a lot of errors like this:

[2018-12-20 15:58:42,295] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:748)

 

and the consumer-side DEBUG error looks like this:

2018-12-20 16:04:08,103 DEBUG ZTE org.apache.kafka.common.network.Selector TransactionID=null InstanceID=null [] Connection with test-ip/110.10.10.100 disconnected [Selector.java] [307]
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:877)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1$$anonfun$apply$mcV$sp$2.apply(KafkaClientProvider.scala:59)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1$$anonfun$apply$mcV$sp$2.apply(KafkaClientProvider.scala:57)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at com.zte.nfv.core.InfiniteIterate.foreach(InfiniteIterate.scala:4)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply$mcV$sp(KafkaClientProvider.scala:57)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply(KafkaClientProvider.scala:54)
at com.zte.polling.provider.kafka.KafkaClientProvider$$anonfun$receiveMessage$1.apply(KafkaClientProvider.scala:54)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 
