Kafka logs folder is growing by more than 100 GB in 24 hours.

New Contributor

Hi,

I have a single Kafka node configured that receives data from Telegraf agents and then passes it on to InfluxDB.

I have log retention set to 1 hour in the kafka.server.properties file as log.retention.hours=1

But Kafka keeps building up log segments, and the log directory fills up within 24 hours:

 

kafka/bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9101 --describe --group kafka_consumer --command-config admin.props | awk '/<topic>/{print $5}'

 

-20333796490
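For reference, the awk filter above keeps only the rows for the topic and prints their fifth column; which column that is varies between Kafka versions, so running the same describe command without the filter shows the header row (TOPIC, PARTITION, CURRENT-OFFSET, LOG-END-OFFSET, LAG, ...) and makes the printed value unambiguous. A minimal sketch, assuming the same broker port, consumer group, and admin.props as above:

# Describe the consumer group without filtering, so the column headers are visible next to the values
kafka/bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9101 --describe --group kafka_consumer --command-config admin.props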

 

Is it because of the throughput, i.e. can the logs grow beyond an hour's worth of data if they are not consumed in time?

In another environment where the load is even higher, this issue is not observed.

Any suggestion?

1 REPLY

Expert Contributor

Hi @danurag 

 

It's recommended to set retention at the topic level (unless you want all your topics to use the broker-wide default), for example:

 
kafka-configs --bootstrap-server <brokerHost:brokerPort> --alter --entity-type topics --entity-name <topicName> --add-config retention.ms=3600000
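After altering the topic, the same tool can confirm that the override took effect; a minimal sketch, assuming the same placeholder broker address and topic name as above:

# List the per-topic overrides; retention.ms=3600000 should appear after the --alter above
kafka-configs --bootstrap-server <brokerHost:brokerPort> --describe --entity-type topics --entity-name <topicName>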
 

The most common configuration for how long Kafka will retain messages is by time. The default is specified in the configuration file using the log.retention.hours parameter, and it is set to 168 hours, or one week. There are two other parameters as well, log.retention.minutes and log.retention.ms. All three control the same goal (the amount of time after which messages may be deleted), but the recommended parameter to use is log.retention.ms. If more than one is specified, the smaller unit size takes precedence, which ensures that the value set for log.retention.ms is always the one used.
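To make that precedence rule concrete, here is a broker-configuration sketch (values chosen only for illustration): with both lines below present, the broker uses the log.retention.ms value of one hour, even though log.retention.hours asks for one week.

# Illustration only: log.retention.ms takes precedence as the smaller unit,
# so effective retention is 1 hour (3600000 ms), not 168 hours.
log.retention.hours=168
log.retention.ms=3600000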