Controlling the size of the kafka.out log file

Kafka error logs are filling up the disk and bringing Kafka down. I'm looking for options to purge the old Kafka error logs.

The logs that are filling up are server.log.** and kafka.out.

1 ACCEPTED SOLUTION

Let's clear up some confusion - we're not talking about Kafka data logs, but about logging of the Kafka broker process itself. So, your logs are getting big? There are several solutions, depending on your appetite for coding and Linux admin automation.

Reduce Logging Output to WARN

The Kafka broker is quite chatty about client connections, and that fills up logs quickly. Update the logger level for server.log to write only WARN and above. E.g. in Ambari, go to Kafka -> Configs -> Advanced kafka-log4j, scroll down to the log4j.logger.kafka entry, and change the level to WARN:

[Screenshot 338-screenshot.png: Advanced kafka-log4j section with log4j.logger.kafka set to WARN]
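
If you manage kafka-log4j.properties by hand instead of through Ambari, the change is a one-liner. A minimal sketch, assuming the stock appender name kafkaAppender:

# Log only WARN and above from the Kafka broker to server.log
log4j.logger.kafka=WARN, kafkaAppender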

Change hourly logs to daily and rotate

By default, the logs in question roll to a new file every hour. If you pair this with an external rotation/deletion policy, you might want to switch to daily files instead. In the same section as above, find and update log4j.appender.kafkaAppender.DatePattern. See the reference docs for the pattern syntax: https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
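
For example, switching from the default hourly pattern to a daily one (again assuming the appender is named kafkaAppender):

# Default hourly rolling:
# log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
# Daily rolling instead:
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd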

Use LogRotate

Read up on logrotate: http://linuxcommand.org/man_pages/logrotate8.html . It is powerful, but given that you already have log4j in Kafka, it might be redundant. It's an option if you want to lean more on the admin side than on app dev/ops.
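
A minimal sketch of what an /etc/logrotate.d/kafka entry could look like, assuming the logs live under /var/log/kafka (adjust the paths for your install). copytruncate is used because the broker keeps the files open:

# Rotate Kafka process logs daily, keep 7 compressed copies
/var/log/kafka/server.log /var/log/kafka/kafka.out {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}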

Use an enhanced Log4j rolling appender with MaxBackupIndex

The log4j version shipped by default doesn't support the MaxBackupIndex attribute on DailyRollingFileAppender. You can, however, find an enhanced version in many libraries on the internet or quickly compile one yourself, e.g. from this: http://wiki.apache.org/logging-log4j/DailyRollingFileAppender . Once you drop the extra jar into Kafka's lib directory, you can set the MaxBackupIndex attribute on the appender in the config to specify how many of those log files to keep around (see the sketch below the warning).

WARNING: using MaxBackupIndex also means old logs will be deleted, and therefore lost if they are not picked up (e.g. archived) in time.
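
A sketch of the resulting appender config. The appender class name below is hypothetical - use whatever class the enhanced jar you dropped into Kafka's lib directory actually provides:

# Hypothetical class name - depends on the enhanced log4j build you use
log4j.appender.kafkaAppender=org.apache.log4j.EnhancedDailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd
# Keep at most 7 rolled files; older ones are deleted
log4j.appender.kafkaAppender.MaxBackupIndex=7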

5 REPLIES

Master Mentor

@jramakrishna@hortonworks.com

Please see this:

log.retention.bytes (default: -1)

The amount of data to retain in the log for each topic partition. Note that this is a per-partition limit, so multiply by the number of partitions to get the total data retained for the topic. Also note that if log.retention.hours and log.retention.bytes are both set, a segment is deleted when either limit is exceeded. This setting can be overridden on a per-topic basis (see the per-topic configuration section).
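
For example, in server.properties (illustrative values, not recommendations):

# Delete a partition's oldest segments once it exceeds 1 GiB,
# or once segments are older than 72 hours - whichever comes first
log.retention.bytes=1073741824
log.retention.hours=72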

Hi Neeraj. I am mainly looking at the log4j side of things.

Master Mentor

@jramakrishnan@hortonworks.com you need to change the following log4j property

[Screenshot 339-img1.png: the kafka-log4j property to change]


Thanks a lot for all the pointers.