Member since: 08-21-2018
6 Posts
1 Kudos Received
0 Solutions
08-30-2018 02:51 AM
Hi,

Follow the steps below.

1) Change the Inter Broker Protocol property to SASL_PLAINTEXT in the Kafka configuration in Cloudera Manager, and restart the Kafka service.

2) Create a jaas.conf file in your home path /home/userid/:

vi jaas.conf

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=false
  useKeyTab=true
  serviceName="kafka"
  storeKey=true
  keyTab="/home/userid/useridkerberos.keytab"
  principal="userid@REALMHOSTNAME.COM"
  client=true;
};

Create a new keytab using the same user's principal, put it at the path given in keyTab, and replace the principal with your own.

3) Create a client.properties file containing the following properties in the same path /home/userid/:

sudo vi client.properties

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

4) Log in as the same user and run kinit:

kinit -kt useridkerberos.keytab userid@REALMHOSTNAME.COM

5) Creating topics:
----------------
/usr/bin/kafka-topics --create --zookeeper hostname1:2181,hostname2:2181,hostname3:2181/kafka --replication-factor 2 --partitions 2 --topic newtopic1

6) Describing topics:
------------------
/usr/bin/kafka-topics --describe --zookeeper hostname1:2181,hostname2:2181,hostname3:2181/kafka --topic testtopic1

7) Export KAFKA_OPTS so the command-line tools pick up the JAAS file:

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"

Verify it:

echo "$KAFKA_OPTS"

8) Writing messages using the producer:
--------------------------------
/usr/bin/kafka-console-producer --broker-list brokerhostname1:9092,brokerhostname2:9092 --topic newtopic6 --producer.config client.properties

9) Open a second session on the same machine and run the consumer command.

Reading messages using the consumer:
-------------------------------
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"
/usr/bin/kafka-console-consumer --new-consumer --topic newtopic6 --from-beginning --bootstrap-server brokerhostname1:9092,brokerhostname2:9092 --consumer.config client.properties

Try it; it should work at your end as well!
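As a quick end-to-end check of the steps above (a sketch only, reusing the example hostnames, topic name, and file paths from this post; adjust them for your cluster):

# 1) Get a Kerberos ticket and confirm it is valid
kinit -kt /home/userid/useridkerberos.keytab userid@REALMHOSTNAME.COM
klist

# 2) Point the Kafka command-line tools at the JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"

# 3) Send one test message through the secured listener
echo "test-message" | /usr/bin/kafka-console-producer --broker-list brokerhostname1:9092,brokerhostname2:9092 --topic newtopic6 --producer.config /home/userid/client.properties

# 4) Read it back (in a second session, export KAFKA_OPTS there as well)
/usr/bin/kafka-console-consumer --new-consumer --topic newtopic6 --from-beginning --bootstrap-server brokerhostname1:9092,brokerhostname2:9092 --consumer.config /home/userid/client.properties

If the broker disconnects immediately, the usual suspects are a missing or expired ticket (check klist), KAFKA_OPTS not exported in that session, or a client.properties still set to PLAINTEXT.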
08-23-2018 12:52 AM
Hi Rajesh,

I am also facing the same issue while trying to send messages with the Kafka console producer. I am using 2 brokers in my cluster and created the topic after logging into one of the broker host machines. When I run the command below it never reaches the message prompt, and when I type something and press Enter it gives an error.

Command:

/usr/bin/kafka-console-producer --broker-list hostname1:9092,hostname2:9092 --topic testtopic1

Result:

18/08/22 22:51:32 INFO producer.ProducerConfig: ProducerConfig values:
  compression.type = none
  metric.reporters = []
  metadata.max.age.ms = 300000
  metadata.fetch.timeout.ms = 60000
  reconnect.backoff.ms = 50
  sasl.kerberos.ticket.renew.window.factor = 0.8
  bootstrap.servers = [b2brp-cdh-cmsn0.hostanameXXXXXXXX:9092, b2brp-cdh-cmsn1.hostanameXXXXXXXX:9092]
  retry.backoff.ms = 100
  sasl.kerberos.kinit.cmd = /usr/bin/kinit
  buffer.memory = 33554432
  timeout.ms = 30000
  key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
  sasl.kerberos.service.name = null
  sasl.kerberos.ticket.renew.jitter = 0.05
  ssl.keystore.type = JKS
  ssl.trustmanager.algorithm = PKIX
  block.on.buffer.full = false
  ssl.key.password = null
  max.block.ms = 60000
  sasl.kerberos.min.time.before.relogin = 60000
  connections.max.idle.ms = 540000
  ssl.truststore.password = null
  max.in.flight.requests.per.connection = 5
  metrics.num.samples = 2
  client.id = console-producer
  ssl.endpoint.identification.algorithm = null
  ssl.protocol = TLS
  request.timeout.ms = 1500
  ssl.provider = null
  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
  acks = 0
  batch.size = 16384
  ssl.keystore.location = null
  receive.buffer.bytes = 32768
  ssl.cipher.suites = null
  ssl.truststore.type = JKS
  security.protocol = PLAINTEXT
  retries = 3
  max.request.size = 1048576
  value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
  ssl.truststore.location = null
  ssl.keystore.password = null
  ssl.keymanager.algorithm = SunX509
  metrics.sample.window.ms = 30000
  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
  send.buffer.bytes = 102400
  linger.ms = 1000
18/08/22 22:51:32 INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
18/08/22 22:51:32 INFO utils.AppInfoParser: Kafka commitId : unknown
hi
18/08/22 22:51:40 WARN clients.NetworkClient: Bootstrap broker b2brp-cdh-cmsn1.hostanameXXXXXXXX:9092 disconnected
18/08/22 22:51:41 WARN clients.NetworkClient: Bootstrap broker b2brp-cdh-cmsn0.hostanameXXXXXXXX:9092 disconnected

Could you let me know in which path I can find the two config files (consumer.properties and producer.properties)?

Thanks in advance.
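For context on the dump above (a hedged reading, not a confirmed diagnosis): the producer is running with security.protocol = PLAINTEXT, so if the brokers expect Kerberos the bootstrap connections get dropped exactly like this. There is no required location for producer.properties or consumer.properties; any properties file passed with --producer.config or --consumer.config works. An illustrative sketch only, with /home/userid/client.properties as an assumed path:

# contents of /home/userid/client.properties
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

# then run the producer with that file and a JAAS config
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/userid/jaas.conf"
/usr/bin/kafka-console-producer --broker-list hostname1:9092,hostname2:9092 --topic testtopic1 --producer.config /home/userid/client.properties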
08-22-2018 04:20 AM
1 Kudo
Hi, the issue has been resolved and the Kafka broker is up and running fine now 🙂 I modified the broker.id value in meta.properties (broker.id=341). I made this change on all the Kafka broker machines, in the path /var/local/kafka/data. Thanks.
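A minimal sketch of the change described above, assuming the data directory is /var/local/kafka/data on every broker host and that 341 is the broker.id Cloudera Manager has configured for that particular host (each broker keeps its own id):

# back up the stored metadata, align the stored id with the configured one,
# then restart the Kafka broker role from Cloudera Manager
sudo cp /var/local/kafka/data/meta.properties /var/local/kafka/data/meta.properties.bak
sudo sed -i 's/^broker.id=.*/broker.id=341/' /var/local/kafka/data/meta.properties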
08-22-2018 01:05 AM
Hi,

I am using Cloudera CDH 5.14.4. I added the CSD file, then downloaded and activated the Kafka parcel. After adding the Kafka service I am unable to start the Kafka brokers.

CSD: kafka-1.2.0.jar
Kafka parcel: 2.0.2-1.2.0.2.p0.5

I found the error below in the log file /var/log/kafka/kafka-broker-hostnameXXX.log:

2018-08-22 16:27:07,747 INFO kafka.server.KafkaServer: starting
2018-08-22 16:27:07,754 INFO kafka.server.KafkaServer: Connecting to zookeeper on b2brp-cdh-cedn0.hostanameXXXXXX:2181,b2brp-cdh-cedn1.hostanameXXXXXX:2181,b2brp-cdh-cmsn0.hostanameXXXXXX:2181,b2brp-cdh-cmsn1.hostanameXXXXXX:2181,b2brp-cdh-cmsn2.hostanameXXXXXX:2181
2018-08-22 16:27:07,755 INFO org.I0Itec.zkclient.ZkClient: JAAS File name: /run/cloudera-scm-agent/process/1160-kafka-KAFKA_BROKER/jaas.conf
2018-08-22 16:27:07,757 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=b2brp-cdh-cedn0.hostanameXXXXXX:2181,b2brp-cdh-cedn1.hostanameXXXXXX:2181,b2brp-cdh-cmsn0.hostanameXXXXXX:2181,b2brp-cdh-cmsn1.hostanameXXXXXX:2181,b2brp-cdh-cmsn2.hostanameXXXXXX:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@7a220c9a
2018-08-22 16:27:07,757 INFO org.I0Itec.zkclient.ZkEventThread: Starting ZkClient event thread.
2018-08-22 16:27:07,759 INFO org.I0Itec.zkclient.ZkClient: Waiting for keeper state SaslAuthenticated
2018-08-22 16:27:07,764 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2018-08-22 16:27:07,765 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server b2brp-cdh-cmsn1.hostanameXXXXXX/xx.xx.xx.xx:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2018-08-22 16:27:07,766 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to b2brp-cdh-cmsn1.hostanameXXXXXX/xx.xx.xx.xx:2181, initiating session
2018-08-22 16:27:07,776 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server b2brp-cdh-cmsn1.hostanameXXXXXX/xx.xx.xx.xx:2181, sessionid = 0x564e90a3c9c9b85, negotiated timeout = 6000
2018-08-22 16:27:07,776 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2018-08-22 16:27:07,784 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SaslAuthenticated)
2018-08-22 16:27:07,830 INFO kafka.log.LogManager: Loading logs.
2018-08-22 16:27:07,837 INFO kafka.log.LogManager: Logs loading complete.
2018-08-22 16:27:08,108 INFO kafka.log.LogManager: Starting log cleanup with a period of 300000 ms.
2018-08-22 16:27:08,110 INFO kafka.log.LogManager: Starting log flusher with a default period of 9223372036854775807 ms.
2018-08-22 16:27:08,112 INFO kafka.log.LogCleaner: Starting the log cleaner
2018-08-22 16:27:08,117 INFO kafka.log.LogCleaner: [kafka-log-cleaner-thread-0], Starting
2018-08-22 16:27:08,124 FATAL kafka.server.KafkaServer: Fatal error during KafkaServer startup. Prepare to shutdown
kafka.common.InconsistentBrokerIdException: Configured broker.id 341 doesn't match stored broker.id 186 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
	at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:635)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:184)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
	at kafka.Kafka$.main(Kafka.scala:67)
	at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76)
	at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
2018-08-22 16:27:08,126 INFO kafka.server.KafkaServer: shutting down
2018-08-22 16:27:08,129 INFO kafka.log.LogManager: Shutting down.
2018-08-22 16:27:08,130 INFO kafka.log.LogCleaner: Shutting down the log cleaner.
2018-08-22 16:27:08,130 INFO kafka.log.LogCleaner: [kafka-log-cleaner-thread-0], Shutting down
2018-08-22 16:27:08,131 INFO kafka.log.LogCleaner: [kafka-log-cleaner-thread-0], Stopped
2018-08-22 16:27:08,131 INFO kafka.log.LogCleaner: [kafka-log-cleaner-thread-0], Shutdown completed
2018-08-22 16:27:08,137 INFO kafka.log.LogManager: Shutdown complete.
2018-08-22 16:27:08,138 INFO org.I0Itec.zkclient.ZkEventThread: Terminate ZkClient event thread.
2018-08-22 16:27:08,147 INFO org.apache.zookeeper.ZooKeeper: Session: 0x564e90a3c9c9b85 closed
2018-08-22 16:27:08,147 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2018-08-22 16:27:08,149 INFO kafka.server.KafkaServer: shut down completed
2018-08-22 16:27:08,150 FATAL kafka.server.KafkaServerStartable: Fatal error during KafkaServerStartable startup. Prepare to shutdown
kafka.common.InconsistentBrokerIdException: Configured broker.id 341 doesn't match stored broker.id 186 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
	at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:635)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:184)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
	at kafka.Kafka$.main(Kafka.scala:67)
	at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76)
	at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
2018-08-22 16:27:08,150 INFO kafka.server.KafkaServer: shutting down

Can anyone please help me resolve this issue? Thanks in advance.
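For anyone hitting the same InconsistentBrokerIdException, the mismatch can be confirmed on the broker host before changing anything. A sketch only: the process directory number comes from the log above and changes on every restart, the data directory path is the one mentioned in the resolution earlier in this thread, and the generated config file name kafka.properties is an assumption based on typical CDH-managed Kafka brokers.

# broker.id that Cloudera Manager is starting this broker with
grep '^broker.id' /run/cloudera-scm-agent/process/1160-kafka-KAFKA_BROKER/kafka.properties

# broker.id recorded on disk by a previous Kafka installation
grep '^broker.id' /var/local/kafka/data/meta.properties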
Labels:
Apache Kafka