Created 05-27-2024 03:23 AM
Hi,
Currently we have NiFi log rotation configured so that logs roll daily and at 100 MB of size, which gives us approximately 700 to 800 log files per day.
My question is: if I change the size from 100 MB to 500 MB or 1 GB, will it affect NiFi's performance? Or should I rotate logs per hour with 50 MB files for better performance?
Current log rotation is as follows:
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.%i.log.zip</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
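For reference, the hourly option I mentioned would just mean adding the hour to the date pattern, something like this (a sketch only, not what we run today; with an hourly pattern, maxHistory counts hours rather than days, so 720 is roughly 30 days):
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log.zip</fileNamePattern>
<maxFileSize>50MB</maxFileSize>
<!-- with an hourly date pattern, maxHistory is measured in hours -->
<maxHistory>720</maxHistory>
</rollingPolicy>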
Created 05-28-2024 02:02 AM
Log rotation settings have no direct impact on NiFi data flow processing performance.
It is just that there should be enough space available on the file system to store the log files.
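If disk space is the concern, recent logback versions also support a totalSizeCap on the SizeAndTimeBasedRollingPolicy, which deletes the oldest archives once the rolled logs exceed a total size. A sketch only, using your existing policy (the 20GB value is just an example; size it for your file system):
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.%i.log.zip</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<maxHistory>30</maxHistory>
<!-- example cap: oldest archives are removed once rolled logs exceed this total size -->
<totalSizeCap>20GB</totalSizeCap>
</rollingPolicy>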
Thank you
Created 05-28-2024 06:49 AM
@Alexy
100% agree with @ckumar
Matt
Created 06-04-2024 05:24 AM
To answer your question: after compressing, we have about 2 GB of log files generated per day, most of it from the nifi-app log files.
Below is the configuration used as of now.
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxFileSize>500MB</maxFileSize>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
And we are getting 80-100 files per day.
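As a rough back-of-envelope: at about 2 GB of compressed logs per day with maxHistory of 30 days, that works out to roughly 2 GB/day × 30 days ≈ 60 GB of retained nifi-app archives, plus the uncompressed log for the current day, which is what our log volume has to accommodate.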
Created 06-04-2024 09:03 AM
@Alexy
Are you specifically needing to produce so much logging?
What loggers do you have added to your logback.xml?
How many are set to "INFO" level logging?
If you only want to log exceptions, you could change "INFO" to "WARN" or "ERROR" to greatly reduce the amount of INFO logging being produced.
As far as NiFi performance goes, it is all about managing CPU load average and disk I/O (specifically the disk I/O of the disks where NiFi's content, flowfile, and provenance repositories are located). You could make sure your logs are being written to a separate disk so that log disk I/O does not impact NiFi's repository disks.
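As a sketch of that last point (the /nifi-logs path is a placeholder, and the appender name/pattern shown here are from memory of the stock logback.xml, so verify against your own file), the nifi-app appender can point directly at a dedicated log mount:
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- placeholder path for illustration: a mount separate from NiFi's content/flowfile/provenance repository disks -->
<file>/nifi-logs/nifi-app.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>/nifi-logs/nifi-app_%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxFileSize>500MB</maxFileSize>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<!-- illustrative pattern; keep whatever pattern your install already uses -->
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>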
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
Thank you,
Matt
Created 06-12-2024 05:43 AM
We have lots of loggers. Below is a small part of our logback.xml:
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.processors.standard.LogMessage" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
We already have a separate PVC mounted for storing logs for our NiFi pods. Right now I see a total of about 8 GB of logs generated per day. I'm compressing them, which keeps it somewhat under control.
Created 06-14-2024 07:46 AM
@Alexy
Without seeing your logs, I have no idea which NiFi classes are producing the majority of your logging, but logback is functioning exactly as you have it configured: each time nifi-app.log reaches 500 MB within a single day, it is compressed and rolled using an incrementing number. I would suggest changing the log level for the base class "org.apache.nifi" from INFO to WARN. The bulk of all NiFi classes begin with org.apache.nifi, so by changing this to WARN you will only see ERROR and WARN level log output from the bulk of the org.apache.nifi.<XYZ...> classes.
<logger name="org.apache.nifi" level="WARN"/>
Unless you have a lot of exceptions happening within the NiFi processor components used in your dataflow(s), this should significantly reduce the amount of nifi-app.log logging being produced.
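For example, keeping your existing LogAttribute/LogMessage loggers at INFO while raising the base package to WARN would look like this (logback applies the most specific matching logger, so these overrides still take effect):
<logger name="org.apache.nifi" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.processors.standard.LogMessage" level="INFO"/>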
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
Thank you,
Matt