Created on 11-22-2017 05:54 AM - edited 09-16-2022 05:33 AM
Hi,
We have recently started using Kafka 0.10.2 but are unable to produce or consume any messages. Kerberos is enabled. Below are my configs. There is no error, and the Kafka data log has no entries either, but the index gets updated whenever we run a producer.
kafka-console-producer --broker-list kafka1.test.com:9092,kafka2.test.com:9092 --producer.config client.properties --topic TEST
kafka-console-consumer --topic TEST --from-beginning --bootstrap-server kafka1.test.com:9092,kafka2.test.com:9092 --consumer.config consumer.properties
jaas.conf:
KafkaClient { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };
client.properties/consumer.properties:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
17/11/22 12:43:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 4 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
17/11/22 12:44:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 2 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
17/11/22 12:45:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 5 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
17/11/22 12:46:01 ERROR internals.ErrorLoggingCallback: Error when sending message to topic TEST with key: null, value: 4 bytes with error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
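Before digging into Kerberos, it can be worth confirming that the brokers are reachable on the SASL_PLAINTEXT port at all, since "Failed to update metadata" also appears when the listener is simply unreachable. A minimal sketch using bash's /dev/tcp; the hostnames are the ones from the commands above and will of course only resolve inside that environment:

```shell
# Probe each broker's port 9092; a metadata timeout with no broker-side
# log entry often just means the client never reached a SASL listener.
for b in kafka1.test.com kafka2.test.com; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/${b}/9092" 2>/dev/null; then
    echo "${b}:9092 reachable"
  else
    echo "${b}:9092 NOT reachable"
  fi
done
```

If a broker shows as unreachable, fix network/DNS/listener configuration first; if it is reachable, the problem is on the SASL/Kerberos side.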
Created 11-26-2017 12:19 PM
Created on 08-30-2018 02:51 AM - edited 08-30-2018 02:57 AM
Hi,
Follow the steps below.
1) Change the Inter Broker Protocol property to SASL_PLAINTEXT in the Cloudera Manager Kafka configuration, and restart the Kafka service.
2) Create a jaas.conf file in your home path /home/userid/
vi jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
serviceName="kafka"
storeKey=true
// create a new keytab using the principal of the same user id and put it in the path below
keyTab="/home/userid/useridkerberos.keytab"
// replace the principal name below with your own
principal="userid@REALM.COM"
client=true;
};
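If you prefer not to edit the file by hand, step 2 can be scripted. A minimal sketch, assuming the same home path and the same placeholder keytab path and principal as above (substitute your own):

```shell
# Write the JAAS file from step 2 non-interactively.
# Keytab path and principal are placeholders; substitute your own.
cat > "${HOME}/jaas.conf" <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=false
  useKeyTab=true
  serviceName="kafka"
  storeKey=true
  keyTab="/home/userid/useridkerberos.keytab"
  principal="userid@REALM.COM"
  client=true;
};
EOF

# Sanity check: exactly one KafkaClient login section should exist.
grep -c '^KafkaClient' "${HOME}/jaas.conf"
```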
3) Create a client.properties file containing the following properties in the same path /home/userid/
sudo vi client.properties
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
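Step 3 can likewise be scripted; a minimal sketch that writes client.properties non-interactively, with the path and values taken from the step above:

```shell
# Write the SASL client properties from step 3 without an editor.
# The path under ${HOME} mirrors /home/userid/ from the steps above.
cat > "${HOME}/client.properties" <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
EOF

# Both keys must be present; without security.protocol the client
# talks PLAINTEXT to a SASL listener and metadata requests time out.
grep -c '=' "${HOME}/client.properties"
```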
4) Log in as the same user id and run kinit:
kinit -kt useridkerberos.keytab userid@REALM.COM
5) Creating Topics:
----------------
/usr/bin/kafka-topics --create --zookeeper hostname1:2181,hostname2:2181,hostname3:2181/kafka --replication-factor 2 --partitions 2 --topic newtopic1
6) Describing Topics:
------------------
/usr/bin/kafka-topics --describe --zookeeper hostname1:2181,hostname2:2181,hostname3:2181/kafka --topic newtopic1
7) export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"
verify it: echo "$KAFKA_OPTS"
8) Writing message using Producer:
--------------------------------
/usr/bin/kafka-console-producer --broker-list brokerhostname1:9092,brokerhostname2:9092 --topic newtopic1 --producer.config client.properties
9) Open a duplicate session on the same machine and run the consumer command.
Reading message using Consumer:
-------------------------------
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"
/usr/bin/kafka-console-consumer --new-consumer --topic newtopic1 --from-beginning --bootstrap-server brokerhostname1:9092,brokerhostname2:9092 --consumer.config client.properties
Try it, it may work at your end as well!
Created 08-30-2018 04:46 AM
poojary_sudth:
Thank you, but that is more or less the same set of tasks I did before.
I found how to make the consumer work: it was necessary to add the parameter --partition 0:
KAFKA_OPTS="-Djava.security.auth.login.config=/root/jaas.conf" kafka-console-consumer --bootstrap-server ourHost:9092 --topic test --consumer.config /root/client.properties --partition 0
I cannot see all the messages coming into the topic, but at least those that fall into the specified partition are printed, which is enough for me to confirm that the Kafka broker works.
I found this hint here: https://stackoverflow.com/questions/34844209/consumer-not-receiving-messages-kafka-console-new-consu...
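Since --partition reads only one partition at a time, one workaround is to run a console consumer per partition. The sketch below just prints the command to run for each partition rather than executing it (that needs a live cluster); the host, topic, config path, and the assumed partition count of 2 are taken from the workaround above:

```shell
# Build one console-consumer invocation per partition of a 2-partition topic.
# Host, topic, and config path are the ones from the workaround above.
BOOTSTRAP="ourHost:9092"
TOPIC="test"
PARTITIONS=2

for p in $(seq 0 $((PARTITIONS - 1))); do
  echo "kafka-console-consumer --bootstrap-server ${BOOTSTRAP}" \
       "--topic ${TOPIC} --consumer.config /root/client.properties" \
       "--partition ${p} --from-beginning"
done
```

Running each printed command in its own session (with KAFKA_OPTS exported as above) covers every partition of the topic.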