When there are a lot of messages, the mirror maker is not able to consume from the last saved offset
Labels: Apache Kafka
Created on 01-31-2017 01:34 AM - edited 09-16-2022 03:59 AM
I'm having an issue with Kafka Mirror Maker. I stopped the mirror maker for 30 minutes during a cluster upgrade, and after the restart of the cluster the mirror maker is not able to consume data from the source cluster. The lag of the mirror maker's consumer group is very high, so I'm looking for parameters to change in order to increase the mirror maker's buffer size. If I switch the mirror maker to a new consumer group, it starts consuming again from the latest messages. When I try to restart the process from the last saved offsets, I see a peak of consumed data, but the mirror maker is not able to commit offsets; in fact the log is blocked at the row:
INFO kafka.tools.MirrorMaker$: Committing offsets
and no more rows are shown after this one.
I think the problem is related to the huge amount of data to process. I'm running a cluster with Kafka 0.8.2.1 with this configuration:
auto.offset.reset=largest
offsets.storage=zookeeper
dual.commit.enabled=false
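Alongside the settings above, a few other properties of the 0.8.x high-level consumer control how much data is buffered per fetch, which is what "increasing the buffer size" would mean here. The values below are only an illustrative sketch, not tested recommendations:

```properties
# Sketch of 0.8.x consumer buffer-related properties (values are assumptions):
# largest message fetch per partition request (default ~1 MB)
fetch.message.max.bytes=10485760
# TCP receive buffer for the consumer sockets
socket.receive.buffer.bytes=1048576
# number of fetched chunks queued in memory per consumer
queued.max.message.chunks=10
```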
Created ‎01-31-2017 08:06 AM
- num.streams
- num.producers

Increasing num.streams will increase the number of consumer threads you have running, and increasing num.producers will allow you to produce more messages to the destination in parallel.
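For reference, on the 0.8.x MirrorMaker these settings were passed as command-line flags; the file names and values below are illustrative assumptions:

```shell
# Hypothetical invocation sketch (0.8.x MirrorMaker CLI flags):
bin/kafka-mirror-maker.sh \
  --consumer.config source-consumer.properties \
  --producer.config target-producer.properties \
  --num.streams 8 \
  --num.producers 4 \
  --whitelist ".*"
```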
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
-pd
Created 02-02-2017 03:18 AM
Hi, thank you for the response.
The number of consumer streams is bounded by the number of partitions of the topic, so increasing the number of consumers and producers will not solve the problem: within a single consumer group, each partition can be consumed by at most one consumer.
I was also thinking about increasing the queue size of the mirror maker, but this is still not working.
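Since parallelism is capped by the partition count, it can be worth confirming how many partitions the mirrored topic actually has. This sketch uses the 0.8.x topic tool; the ZooKeeper address and topic name are placeholder assumptions:

```shell
# Hypothetical check: partition count caps the useful value of num.streams.
bin/kafka-topics.sh --describe \
  --zookeeper source-zk:2181 \
  --topic my-topic
```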
Created 11-14-2019 03:48 PM
Hi,
is this issue resolved? Could you help us understand how you got around the huge lag problem without losing data?
Created 11-14-2019 08:53 PM
@bdelpizzo Can you see any errors in the Kafka or mirror maker logs? It might be that the mirror maker is not able to process messages because of message size: if any message is larger than the configured/default value, it may get stuck in the queue.
Check the message.max.bytes property.
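As a sketch of the size alignment being suggested here (the values are illustrative assumptions, not recommendations):

```properties
# Broker (server.properties): largest message the broker accepts.
message.max.bytes=10485760
# MirrorMaker consumer (0.8.x high-level consumer): should be at least the
# broker's message.max.bytes, otherwise an oversized message blocks the fetch.
fetch.message.max.bytes=10485760
```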
