
kafka consumer error


I am getting the error below while consuming messages from a Kafka broker. Can someone please suggest what I am doing wrong or missing? I have listed the steps I am following to create a topic, produce a message, and then consume the message. (FYI, this is on HDP 2.5.5 with Kafka 0.10.x.)

export BK="node1:6667,node1:6667,node1:6667"

export ZK="zk1:2181,zk1:2181,zk1:2181"

Create a topic (after kinit as the kafka user): bin/ --create --zookeeper zk1:2181,zk1:2181,zk1:2181 --replication-factor 3 --partitions 1 --topic test3

List the topics: bin/ --list --zookeeper zk1:2181,zk1:2181,zk1:2181 localhost:2181

Produce a message on the topic: bin/ --broker-list $BK --topic test3

I can also produce messages with port 9092: bin/ --broker-list node1:9092,node2:9092,node2:9092 --topic test3

Consume the message: bin/ --zookeeper $ZK --bootstrap-server $BK --topic test3 --from-beginning (also tried with --security-protocol PLAINTEXTSASL). I am getting this error:

[2017-06-21 02:09:09,620] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)

[2017-06-21 02:09:09,622] WARN SASL configuration failed: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)

No brokers found in ZK.



Hello @sbx hadoop ,

Could you please provide additional info about how you configured your Kafka brokers? Are there listeners for PLAINTEXT and/or SASL_PLAINTEXT, and which one relates to which port, since you used both 6667 and 9092 in your example?

If e.g. port 9092 is configured for kerberized access, you also have to provide a proper JAAS config when starting the kafka-console-consumer (in addition to adding the parameter "--security-protocol"). Assuming your user already has a valid Kerberos ticket, you can use that for auth:


KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true;
};

This JAAS file needs to be made known to the consumer by exporting it into the environment, so that the consumer can pick it up later on, e.g.:
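A minimal sketch of that export (the JAAS file path here is just an example, adjust it to wherever you saved the file):

```shell
# Make the JAAS config visible to the Kafka CLI tools via the JVM option
# the scripts pass through; the path below is an assumption.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
```

After that, the consumer from your question can be started in the same shell with the additional "--security-protocol PLAINTEXTSASL" flag.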


I am not sure if it is just a typo here, but you do not need to specify the same broker/zookeeper three times, that is useless 😉 (see e.g. the --zookeeper property, where the exact same value is specified three times...).
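For illustration, with three distinct hosts the two exports from your question would look like this (node2/node3 and zk2/zk3 are assumed hostnames following the same naming pattern as in your post):

```shell
# Three distinct brokers and ZooKeeper nodes instead of the same host repeated
export BK="node1:6667,node2:6667,node3:6667"
export ZK="zk1:2181,zk2:2181,zk3:2181"
```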

HTH, Gerd