Support Questions


Timeout Error When Using kafka-console-consumer and kafka-console-producer On Secured Cluster

Contributor

I recently installed Kafka onto an already secured cluster. I've configured Kafka to use Kerberos and SSL, and set the protocol to SASL_SSL, roughly following the documentation here (I used certificates already created): https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html

 

When I bring up kafka-console-consumer, a few minor log messages appear, and then it sits waiting for messages as expected. When I bring up kafka-console-producer, the same happens. I am pointing both at the same node, which is both a Kafka broker and a ZooKeeper node, using port 9092 for the producer and port 2181 for the consumer. If I type something into the producer console, however, nothing happens for a while, and then I get the following error:

 

17/07/26 13:11:20 ERROR internals.ErrorLoggingCallback: Error when sending message to topic test with key: null, value: 5 bytes with error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
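For reference, the commands I'm running look roughly like this (the topic name and the client properties file path are placeholders, and the exact options are approximate):

kafka-console-producer --broker-list svd0hdatn01.<DOMAIN>:9092 --topic test --producer.config client.properties
kafka-console-consumer --bootstrap-server svd0hdatn01.<DOMAIN>:2181 --topic test --consumer.config client.properties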

 

The Kafka logs in that timeframe don't seem to have any errors or warnings. The ZooKeeper logs are also clean, except for one warning that shows up only in the log of the ZooKeeper node I am pointing the consumer at:

 

2017-07-26 13:10:17,379 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.EOFException

 

Any ideas on what would cause this behavior or how to further debug what the issue is?

 

1 ACCEPTED SOLUTION


This indicates that your jaas.conf references a keytab that requires a password, or that you are using the ticket cache without doing a kinit before running this command.

Confirm that you are able to connect to the cluster (hdfs dfs -ls /) from the command line first, and then check your jaas.conf based on this documentation:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html
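As a rough example, a keytab-based jaas.conf for the console clients looks something like the following (the principal and keytab path are placeholders to adjust for your environment):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/kafka_client.keytab"
  principal="user@EXAMPLE.COM";
};

If you want to use the ticket cache instead, the entry looks something like this, and you must run kinit before starting the console command:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};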

-pd


16 REPLIES

Try turning on DEBUG for the client commands. If you have a Kafka gateway installed on that node, you can set the DEBUG level in the Kafka service; otherwise, modify /etc/kafka/conf/tools-log4j.properties to set the log level to DEBUG, and then run your producer or consumer.
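For example, the default root logger in tools-log4j.properties is usually WARN; changing it to DEBUG looks something like this (the rest of the file can stay as it is):

# /etc/kafka/conf/tools-log4j.properties
log4j.rootLogger=DEBUG, stderr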

Some "retryable" errors when security is enabled keep the clients from properly connecting.

-pd

Contributor

I've set the Kafka Broker Logging Threshold to DEBUG and am seeing DEBUG statements in the Kafka broker logs. It obviously puts out a lot of information, but I haven't come across anything that looks interesting or useful.

 

This cluster does not have a gateway instance at all.

You need the DEBUG level set for the clients, not the brokers. That is controlled by the "Gateway Logging Threshold", or, on the system where you are running the console commands, by editing /etc/kafka/conf/tools-log4j.properties.

-pd

Contributor

Ah OK, I apologize, I didn't realize the logs were separately controlled.

 

When I enabled that, both the consumer and the producer started logging errors constantly.

 

The consumer logs the following stack trace repeatedly, from the moment it starts until I close it:

 

17/07/26 14:44:40 DEBUG authenticator.SaslClientAuthenticator: Set SASL client state to SEND_HANDSHAKE_REQUEST
17/07/26 14:44:40 DEBUG authenticator.SaslClientAuthenticator: Creating SaslClient: client=svcnonprodhadoop@<DOMAIN>;service=kafka;serviceHostname=svd0hdatn01.<DOMAIN>;mechs=[GSSAPI]
17/07/26 14:44:40 DEBUG network.Selector: Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 124928, SO_TIMEOUT = 0 to node -1
17/07/26 14:44:40 DEBUG authenticator.SaslClientAuthenticator: Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
17/07/26 14:44:40 DEBUG clients.NetworkClient: Completed connection to node -1.  Fetching API versions.
17/07/26 14:44:40 DEBUG network.Selector: Connection with svd0hdatn01.<DOMAIN>/10.96.88.42 disconnected
java.io.EOFException
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:242)
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:166)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:71)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:350)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:370)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:203)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:138)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:219)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:196)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:281)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:996)
        at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:55)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
17/07/26 14:44:40 DEBUG clients.NetworkClient: Node -1 disconnected.
17/07/26 14:44:40 DEBUG clients.NetworkClient: Give up sending metadata request since no node is available
17/07/26 14:44:40 DEBUG clients.NetworkClient: Initialize connection to node -1 for sending metadata request
17/07/26 14:44:40 DEBUG clients.NetworkClient: Initiating connection to node -1 at svd0hdatn01.<DOMAIN>:2181.

 

The producer shows the following log output as soon as I type any input to send to the topic:

 

17/07/26 14:45:43 DEBUG authenticator.SaslClientAuthenticator: Set SASL client state to SEND_HANDSHAKE_REQUEST
17/07/26 14:45:43 DEBUG authenticator.SaslClientAuthenticator: Creating SaslClient: client=svcnonprodhadoop@<DOMAIN>;service=kafka;serviceHostname=svd0hdatn01.<DOMAIN>;mechs=[GSSAPI]
17/07/26 14:45:43 DEBUG network.Selector: Connection with svd0hdatn01.<DOMAIN>/<IP_ADDRESS> disconnected
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
        at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:81)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:335)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:370)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126)
        at java.lang.Thread.run(Thread.java:745)
17/07/26 14:45:43 DEBUG clients.NetworkClient: Node -1 disconnected.
17/07/26 14:45:43 DEBUG clients.NetworkClient: Give up sending metadata request since no node is available
17/07/26 14:45:44 DEBUG clients.NetworkClient: Initialize connection to node -1 for sending metadata request
17/07/26 14:45:44 DEBUG clients.NetworkClient: Initiating connection to node -1 at svd0hdatn01.<DOMAIN>:9092.

Connection refused seems to indicate that this host is not listening on port 9092:

svd0hdatn01.<DOMAIN>:9092

You can check which ports Kafka is listening on:
ps -ef |grep kafka
netstat -nap |grep <kafka pid>
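If netstat is not available on that host, ss shows the same information (run it as root so the process info is displayed):

sudo ss -ltnp | grep <kafka pid>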

-pd

Contributor

OK, so it looks like that took care of one problem, but there's still a problem with the consumer. Following your instructions, I found that the Kafka broker was listening on port 9093, not 9092.

 

Fixing that on the producer then caused the same EOF error to come up as I am seeing on the consumer. 

 

17/07/26 16:04:25 DEBUG authenticator.SaslClientAuthenticator: Set SASL client state to SEND_HANDSHAKE_REQUEST
17/07/26 16:04:25 DEBUG authenticator.SaslClientAuthenticator: Creating SaslClient: client=svcnonprodhadoop@<DOMAIN>;service=kafka;serviceHostname=svd0hdatn01;mechs=[GSSAPI]
17/07/26 16:04:25 DEBUG network.Selector: Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 102400, SO_TIMEOUT = 0 to node -1
17/07/26 16:04:25 DEBUG authenticator.SaslClientAuthenticator: Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
17/07/26 16:04:25 DEBUG clients.NetworkClient: Completed connection to node -1.  Fetching API versions.
17/07/26 16:04:25 DEBUG network.Selector: Connection with svd0hdatn01/10.96.88.42 disconnected
java.io.EOFException
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:242)
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:166)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:71)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:350)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:370)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126)
        at java.lang.Thread.run(Thread.java:745)
17/07/26 16:04:25 DEBUG clients.NetworkClient: Node -1 disconnected.
17/07/26 16:04:25 DEBUG clients.NetworkClient: Give up sending metadata request since no node is available
17/07/26 16:04:25 DEBUG clients.NetworkClient: Initialize connection to node -1 for sending metadata request
17/07/26 16:04:25 DEBUG clients.NetworkClient: Initiating connection to node -1 at svd0hdatn01:9093.

What properties are in your consumer.config file? Does it have SASL_SSL as the protocol?

Can you verify that it is listening with SSL correctly:
openssl s_client -connect svd0hdatn01:9093
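For reference, a SASL_SSL client properties file for the console tools usually looks something like this (the truststore path and password are placeholders):

security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore password>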

-pd

Contributor

 

I realized that the client.properties file was using SASL_PLAINTEXT, not SASL_SSL, so I updated it accordingly. I'm now hitting a new error on both the producer and the consumer. The following error comes up, and then the program exits. I've verified that jaas.conf is referenced in KAFKA_OPTS properly.

 

org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:718)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:597)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:579)
        at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:53)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user
        at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:93)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:109)
        at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:55)
        at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:84)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:657)
        ... 6 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user
        at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:899)
        at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)
        at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
        at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
        at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
        at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
        at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
        at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:55)
        at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:100)
        at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
        at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:81)
        at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:85)
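For reference, this is roughly how I'm passing the jaas.conf to the console commands (the path is a placeholder):

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"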

 

Also, I ran the command you mentioned above, and everything looks right: the SSL handshake read 3151 bytes and wrote 499 bytes using TLS v1.2. If you need more information from it, let me know.

 

EDIT: I realized that the properties file actually was wrong; I've updated the post above with the relevant information.

Contributor

After researching this a bit, I tried a few more things, none of which changed the error: