Support Questions


Error in kafka consumer

Expert Contributor

Hi,

Has anyone seen this error? If so, please let me know.

 

2018-05-21 22:56:20,126 INFO adPoolTaskExecutor-1 s.consumer.internals.AbstractCoordinator - Discovered coordinator ss879.xxx.xxx.xxx.com:9092 (id: 2144756551 rack: null) for group prod-abc-events.
2018-05-21 22:56:20,126 INFO adPoolTaskExecutor-1 s.consumer.internals.AbstractCoordinator - (Re-)joining group prod-abc-events

2018-05-21 22:56:20,126 INFO adPoolTaskExecutor-1 s.consumer.internals.AbstractCoordinator - Marking the coordinator ss879.xxx.xxx.xxx.com:9092 (id: 2144756551 rack: null) dead for group prod-abc-events
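In case it helps, the group can also be inspected from the broker side with the standard consumer-groups tool (assuming the Kafka CLI scripts are on the PATH and the broker from the log is reachable on 9092):

# describe the group that keeps losing its coordinator
kafka-consumer-groups --bootstrap-server ss879.xxx.xxx.xxx.com:9092 --describe --group prod-abc-events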

 

13 REPLIES

Expert Contributor

We have fixed this issue.

New Contributor
I am able to connect to Kafka manually through the terminal, providing details like topic, seed broker, and schema URL. But when I try to run automation and use the ruby-kafka gem to connect with exactly the same details, I get the following error:
 Failed to find coordinator (Kafka::Error)
      ./features/lib/uevents/kafka_consumer.rb:33:in `block in initialize'

Explorer

I am happy you fixed the issue, but next time you might consider writing some details about how you got out of that situation, as others might run into the same problem 🙂

Expert Contributor

This was an issue with that consumer group in __consumer_offsets, and these were the steps we took to fix it.

 

On a single broker, run the command below. It prints a DumpLogSegments command for every __consumer_offsets segment file:

 

1) find /kafka/data -name "*.log" | grep -i consumer | awk '{a=$1;b="kafka-run-class kafka.tools.DumpLogSegments --deep-iteration --print-data-log -files "a; print b}'

 

Now run each of the generated commands on that broker to see which log file contains the consumer group "prod-abc-events", for example:

 

2) kafka-run-class kafka.tools.DumpLogSegments --deep-iteration --print-data-log -files /kafka/data/sdc/__consumer_offsets-24/00000000000000000000.log | grep -i 'prod-abc-events'
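
For convenience, the two steps can also be chained into a single loop on each broker (just a sketch, assuming kafka-run-class is on the PATH and the data directory is /kafka/data as above):

# dump every __consumer_offsets segment and print only the files that mention our group
for f in $(find /kafka/data -name "*.log" | grep -i consumer); do
  kafka-run-class kafka.tools.DumpLogSegments --deep-iteration --print-data-log -files "$f" | grep -qi 'prod-abc-events' && echo "$f"
done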

 

Do the steps above on all the brokers and make a list of all the files that contain 'prod-abc-events'. In our instance we found three files that referenced the group 'prod-abc-events':

 

broker1:
/kafka/data/sda/__consumer_offsets-24/00000000000000000000.log
 
broker2:
/kafka/data/sdc/__consumer_offsets-24/00000000000000000000.log
 
broker3:
/kafka/data/sdc/__consumer_offsets-24/00000000000000000000.log
 
We noticed that the .log file on broker1 was different in size and content from the remaining two. 
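A quick way to spot that difference is to compare the size and checksum of the segment on each broker (the device directory differs per broker, so adjust the sd* part to your layout):

ls -l /kafka/data/sd*/__consumer_offsets-24/00000000000000000000.log
md5sum /kafka/data/sd*/__consumer_offsets-24/00000000000000000000.log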
 
We backed up the file from broker1 and then replaced it with the one from broker2, and that resolved the issue.
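Roughly, the backup-and-replace looked like the sketch below (not the exact commands, and it assumes the broker is stopped first and that there is SSH access between the brokers, using the paths listed above):

# on broker1: keep a backup of the suspect segment
cp /kafka/data/sda/__consumer_offsets-24/00000000000000000000.log /kafka/data/sda/__consumer_offsets-24/00000000000000000000.log.bak

# pull the healthy copy of the segment over from broker2
scp broker2:/kafka/data/sdc/__consumer_offsets-24/00000000000000000000.log /kafka/data/sda/__consumer_offsets-24/00000000000000000000.log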
 
Most likely this happened to us when we ran kafka-reassign-partitions, the drives reached 99% full, and something then broke in __consumer_offsets.