Support Questions

Kafka Consumer throwing error message "commit cannot be completed due to group rebalance"


I am getting an exception at the Kafka consumer, as shown in the attached screenshot. Once the exception occurs, the flow files move very slowly through the PublishKafka and ConsumeKafka processors.

Can I get help understanding the cause of this issue in the NiFi ConsumeKafka processor? I do not get these kinds of issues when I run Kafka consumers standalone from the terminal.

I wonder if I am missing some configuration in NiFi. I am also attaching my configuration properties for ConsumeKafka.




Looking at the error message, it appears the issue is on the Kafka side and is caused by a group rebalance. If you search for this term you will find lots of useful information on the issue: "CommitFailedException Kafka group rebalance"

I tried to get some help from blogs, but I was unable to find relevant information to resolve the issue.

It would be helpful if anyone who has successfully configured the ConsumeKafka processor in NiFi to process thousands of records per minute could share how the processor is configured!




Can you please try creating a topic and then use the Kafka Console Producer to post data to that topic? Once done, please use the Kafka Console Consumer to read the data back, to confirm it works end to end without NiFi.
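For reference, a plain console round trip with the standard Kafka tools might look like the sketch below. The broker address, ZooKeeper address, and topic name are placeholders, and the commands assume a Kafka 0.10-era installation where topic creation goes through ZooKeeper:

```shell
# Create a test topic (assumes ZooKeeper on localhost:2181)
kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic nifi-test

# Post a few records typed on stdin
kafka-console-producer.sh --broker-list localhost:9092 --topic nifi-test

# In another terminal, read them back from the beginning of the topic
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic nifi-test --from-beginning
```

If this round trip works, the broker side is healthy and the problem is more likely in the NiFi processor configuration.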


@Anil Reddy

It looks like a long processing time between poll calls (when the processor handles a large volume of data) can exceed the session timeout and trigger a group rebalance. One thing you can try is increasing the maximum allowed session timeout on the broker side, and setting increased values for the session timeout and heartbeat interval in the ConsumeKafka configs.
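As a sketch, the broker-side and consumer-side settings usually tuned for this rebalance scenario look like the following. The values are illustrative only, not recommendations, and which of them the post above intended is an assumption on my part:

```properties
# Broker side (server.properties): raise the upper bound on consumer session timeouts
group.max.session.timeout.ms=60000

# Consumer side (ConsumeKafka config): allow more time before the broker
# considers the consumer dead
session.timeout.ms=30000
heartbeat.interval.ms=10000
```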

Please let me know if it helps.

Thank you!

We have that flow running fine without NiFi in our current setup. We are trying to migrate to NiFi, hence building the flow in NiFi using ConsumeKafka.


Is your issue resolved? I just wrote a simple NiFi flow and used it to push 5000 records into Kafka and read those 5000 records out of Kafka. I did not get any error.

Nope, the issue is not resolved!


@Anil Reddy

The Kafka server expects to receive at least one heartbeat within the session timeout. The consumer sends a heartbeat once per heartbeat interval, so at most (session timeout / heartbeat interval) heartbeats fit in one session window, and some of them might be missed. Therefore your heartbeat interval should be no more than 1/3 of the session timeout (refer to the Kafka docs).
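The 1/3 rule above is simple arithmetic; with an illustrative session timeout of 30 seconds it works out as:

```shell
# Rule of thumb from the Kafka docs:
# heartbeat interval <= session timeout / 3
session_timeout_ms=30000
max_heartbeat_interval_ms=$(( session_timeout_ms / 3 ))
echo "$max_heartbeat_interval_ms"   # prints 10000
```

So with a 30-second session timeout, the heartbeat interval should be at most 10 seconds.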

@Geoffrey Shelton Okot

Yes, that makes sense. But I am unable to figure out where I can configure those parameters in the ConsumeKafka_0_10 processor.