Kafka topics are not balanced

Hi all


We are seeing strange behavior in our Kafka cluster.


When we run the describe command:


/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zookeper_server:2181 --describe



For example, we get the following (this is the scenario on all topics).


On every topic the Leader is -1 and the Isr is empty, which is not how it should be:


Topic:__consumer_offsets           PartitionCount:50             ReplicationFactor:3               Configs:segment.bytes=1138822,cleanup.policy=compact,compression.type=producer
               Topic: __consumer_offsets          Partition: 0         Leader: -1           Replicas: 1000,1002,1001                Isr: 
               Topic: __consumer_offsets          Partition: 1         Leader: -1           Replicas: 1000,1002,1001                Isr: 
               Topic: __consumer_offsets          Partition: 2         Leader: -1           Replicas: 1000,1002,1001                Isr: 
               Topic: __consumer_offsets          Partition: 3         Leader: -1           Replicas: 1000,1002,1001                Isr: 
               Topic: __consumer_offsets          Partition: 4         Leader: -1           Replicas: 1000,1002,1001                Isr:

Topic:gen_topic_tot                PartitionCount:100           ReplicationFactor:3               Configs:
               Topic: gen_topic_tot               Partition: 0         Leader: -1           Replicas: 1002,1000,1001             Isr: 
               Topic: gen_topic_tot               Partition: 1         Leader: -1           Replicas: 1000,1001,1002             Isr: 
               Topic: gen_topic_tot               Partition: 2         Leader: -1           Replicas: 1001,1002,1000             Isr:
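
The same problem partitions can also be listed directly with the filter options of kafka-topics.sh; a minimal sketch, reusing the ZooKeeper address from the describe command above:

# Show only partitions whose leader is unavailable (Leader: -1)
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zookeper_server:2181 --describe --unavailable-partitions

# Show only partitions whose Isr is smaller than the replica list
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zookeper_server:2181 --describe --under-replicated-partitions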

This happens about two days after we restart ZooKeeper and Kafka.

We also tried to rebalance the topics, but again after ~20 hours the topics return to this unbalanced state with Leader: -1.
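
A sketch of one common way such a rebalance is triggered, assuming the preferred replica election tool shipped with HDP Kafka was used (same placeholder ZooKeeper host as above):

# Ask the controller to move leadership back to the preferred (first-listed) replica of each partition
/usr/hdp/current/kafka-broker/bin/kafka-preferred-replica-election.sh --zookeeper zookeper_server:2181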

At first all topics were balanced and the Isr contained the correct broker IDs, but after more than 24 hours we end up in this state.


We did not see anything wrong in the ZooKeeper logs, and the same goes for Kafka's server.log; the Kafka brokers are up.



1000, 1001, 1002 are the broker IDs.






Michael-Bronson
1 REPLY

Expert Contributor

Hi Mike, 

 

The controller is responsible for administrative operations, including assigning partitions to brokers and monitoring for broker failures.

 

If no leaders are being assigned, one thing you can check is the controller status in the ZooKeeper CLI.

 

To perform ZooKeeper CLI operations, start the ZooKeeper client with "bin/zkCli.sh", then execute get /controller and check whether the output shows an active controller. If the output is "null", you can run rmr /controller; this triggers a new controller election. Finally, make sure you don't have authorization issues by checking the server.log files during broker restart.
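
A minimal sketch of that check (the znode path is standard; the broker ID and timestamp below are only an example of what a healthy controller entry looks like, and the ZooKeeper host is the same placeholder as in the question):

# 1. Open the ZooKeeper shell
bin/zkCli.sh -server zookeper_server:2181

# 2. Inside the shell: see which broker currently holds the controller role
get /controller
#    A healthy cluster returns something like {"version":1,"brokerid":1001,"timestamp":"..."}
#    "null" or no data means there is no active controller.

# 3. Only if there is no active controller: delete the znode to force a new election
rmr /controller

# 4. Then watch each broker's server.log during restart for authorization errors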

 

Regards,

Manuel.