Support Questions

kafka-console-consumer suggests bootstrap-server but works with zookeeper

Expert Contributor

I created a topic named erkan_deneme

With the following command I sent some messages to my topic:

/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list hadooptest01.datalonga.com:6667 --topic erkan_deneme

Then I tried to receive messages from topic erkan_deneme:

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server hadooptest01.datalonga.com:6667 --topic erkan_deneme --from-beginning

I couldn't get any messages: no warnings, no errors on the console, and nothing in the Kafka logs. I was going mad until I realized that I had not created any Ranger policy on the erkan_deneme topic. I then created a Ranger policy for that topic and user, but it didn't work either. Finally I tried the following command:

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper hadooptest01.datalonga.com:2181 --topic erkan_deneme --from-beginning

I received messages.

The question is: why did I have to use --zookeeper instead of --bootstrap-server for kafka-console-consumer, even though there is a deprecation warning telling me not to use zookeeper?

It is so weird that it does not work with --bootstrap-server.
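One thing worth checking when --bootstrap-server silently returns nothing is whether the new consumer can reach its group coordinator, which lives on the internal __consumer_offsets topic. The commands below are a sketch, reusing the hostnames and HDP paths already shown in this thread:

```shell
# List consumer groups through the broker (the new-consumer path).
# If this hangs or returns nothing, the group coordinator is a likely suspect.
/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh \
  --new-consumer \
  --bootstrap-server hadooptest01.datalonga.com:6667 \
  --list

# Check whether the internal offsets topic was ever created
# and whether all of its partitions have a leader.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper hadooptest01.datalonga.com:2181 \
  --describe --topic __consumer_offsets
```

If __consumer_offsets does not exist or has offline partitions, the new consumer cannot commit or fetch offsets, while the old --zookeeper consumer is unaffected.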

5 REPLIES

Rising Star

@Erkan ŞİRİN

May I know what Kafka version you are on? The old consumer API used ZooKeeper to store offsets, but recent versions have an option to enable dual commit, committing offsets to Kafka as well as ZooKeeper.
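For reference, the dual-commit behaviour mentioned above is configured on the old (ZooKeeper-based) consumer side, e.g. in a consumer.properties file. A minimal sketch using the standard old-consumer property names:

```properties
# Old consumer: store offsets in Kafka rather than only in ZooKeeper.
offsets.storage=kafka
# During migration, commit offsets to both Kafka and ZooKeeper.
dual.commit.enabled=true
```

This only affects the old consumer; the new consumer (--bootstrap-server) always commits offsets to Kafka's __consumer_offsets topic.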

Also could you please share your server.properties for a quick review.

Thanks!

Expert Contributor

Hi @dbains, the Kafka version is 0.10.1.

# Generated by Apache Ambari. Wed Jun 13 11:53:45 2018
    
advertised.listeners=PLAINTEXT://hadooptest01.datalonga.com:6667
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec,kafka.server.KafkaServer.ClusterId
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.hosts=hadooptest03.datalonga.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=******
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://0.0.0.0:6667
log.cleanup.interval.mins=10
log.dirs=/data/01/kafka/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=hadooptest03.datalonga.com:2181,hadooptest02.datalonga.com:2181,hadooptest01.datalonga.com:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
    
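One detail in this server.properties worth a second look: offsets.topic.replication.factor=3 means __consumer_offsets cannot be created unless at least 3 brokers are alive, and on 0.10.x the new console consumer then tends to hang with no output and no error, which matches the symptom described above. A quick sketch for counting live brokers, using one of the ZooKeeper hosts from this config:

```shell
# List live broker ids registered in ZooKeeper. Fewer than 3 entries here
# would prevent __consumer_offsets (replication factor 3) from being created.
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh \
  hadooptest01.datalonga.com:2181 ls /brokers/ids
```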

Rising Star

@Erkan ŞİRİN Can you please try running the following command:

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server hadooptest01.datalonga.com:6667 --topic erkan_deneme --new-consumer --from-beginning
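One more thing to rule out: the server.properties above enables GSSAPI (sasl.enabled.mechanisms=GSSAPI). If the broker listener the client connects to is actually SASL-secured, the new consumer also needs a security protocol set on the client side, otherwise it can fail without a clear error. A hypothetical client config (the listener shown above is PLAINTEXT, so this may not apply to this cluster):

```shell
# /tmp/client.properties is a hypothetical path; adjust as needed.
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
EOF

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server hadooptest01.datalonga.com:6667 \
  --topic erkan_deneme --from-beginning \
  --consumer.config /tmp/client.properties
```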

Expert Contributor

I have the same problem. Did you solve it?

Mentor

@Ruslan Fialkovsky

Could you open a new thread and share the problems you are encountering so you can get help? I am afraid this thread is not active.

HTH