
how to increase open files on kafka cluster

We have 10 Kafka machines running Kafka version 1.x.

This Kafka cluster is part of HDP version 2.6.5.

We noticed the following message in /var/log/kafka/server.log:

ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files

We also saw:

Broker 21 stopped fetcher for partition ...................... because they are in the failed log dir /kafka/kafka-logs {kafka.server.ReplicaManager}

and

WARN Received a PartitionLeaderEpoch assignment for an epoch &lt; latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:2227488}, Current: {epoch:2, offset:261} for Partition: cars-list-75 {kafka.server.epoch.LeaderEpochFileCache}

So, regarding this issue:

ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files

How can we increase the max open files limit in order to avoid this issue?

Michael-Bronson

Re: how to increase open files on kafka cluster

Contributor

@mike_bronson7 

A Kafka broker needs at least the following number of file descriptors just to track log segment files:

(number of partitions)*(partition size / segment size)
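
For example, with purely illustrative numbers (not from the post above): a broker hosting 100 partitions, each retaining roughly 50 GB of log data with the default 1 GB segment size, would need about:

(100)*(50 GB / 1 GB) = 5000 file descriptors

Keep in mind that every client connection to the broker also consumes a file descriptor, so the real requirement is higher than this estimate.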

You can review the current limits for the broker process with:

cat /proc/<kafka_pid>/limits
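
To find the broker PID in the first place, a quick sketch (assuming the broker runs the usual kafka.Kafka main class; adjust the pattern if your setup differs):

KAFKA_PID=$(pgrep -f kafka.Kafka)
grep "Max open files" /proc/$KAFKA_PID/limits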

If you want to change them and you are using the Ambari console, go to Kafka > Configs and search for "kafka_user_nofile_limit". The Kafka brokers must be restarted for the new limit to take effect.
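
If the cluster is not managed through Ambari, a common alternative (a sketch, assuming the broker runs as the local kafka user) is to raise the limit via /etc/security/limits.d and then restart the broker:

# Sets both the soft and hard nofile limit for the kafka user
echo "kafka - nofile 128000" | sudo tee /etc/security/limits.d/kafka.conf

Note that limits.d applies to sessions opened through PAM; if the broker is started directly by systemd, you would instead set LimitNOFILE in its unit file.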

Finally, to see the broker's open file descriptors, run:

lsof -p KAFKA_BROKER_PID
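
To count them (handy for comparing against the configured limit):

lsof -p KAFKA_BROKER_PID | wc -l

The count is approximate, since lsof also lists memory-mapped files and a header line, but it is close enough to show how near the broker is to the limit.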
