Member since: 07-30-2019
Posts: 3406
Kudos Received: 1622
Solutions: 1008

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 188 | 12-17-2025 05:55 AM |
| | 249 | 12-15-2025 01:29 PM |
| | 183 | 12-15-2025 06:50 AM |
| | 278 | 12-05-2025 08:25 AM |
| | 465 | 12-03-2025 10:21 AM |
05-03-2021
06:09 AM
1 Kudo
@hkh The appender you shared is not valid. You have configured your appender's rolling policy to use SizeAndTimeBasedRollingPolicy; however, your file naming pattern only supports a time-based pattern:

${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d.log

This leaves you with two options:

Option 1:
- Keep using the "SizeAndTimeBasedRollingPolicy", but change your file naming pattern to something like "${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.%i.log". The "{yyyy-MM-dd}" is optional, but allows you to specify the date format.
- With the above pattern, logback will retain 1 day of log history, but may or may not have more than one log in a given day depending on the volume of logging that is occurring. If a daily log reaches the configured "maxFileSize", the log will roll. This keeps your logs at manageable sizes. When the log rolls, it gets a one-up number applied per this new file naming pattern. For example:
nifi-app_2021-04-28.1.log
nifi-app_2021-04-28.2.log
- While this can still result in an unbounded number of incremental log files created in a single day, you can control overall disk usage by adding another property within the "rollingPolicy" section that will start purging incremental rolled logs once the total space consumed by these rolled logs exceeds a set value. Add this line below your "<maxHistory>1</maxHistory>" line:
<totalSizeCap>3GB</totalSizeCap>
Note: This will only remove rolled/archived logs; it will not remove the active log.

Option 2:
- Change the rolling policy you are using to "ch.qos.logback.core.rolling.TimeBasedRollingPolicy".
- With this policy you can keep the file name pattern you already have: "${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d.log"
- You will need to comment out or remove the following line, since it is only valid for size-based rolling policies:
<maxFileSize>1GB</maxFileSize>
- The downside of this setup is that a single daily log file can grow unbounded.

As far as your other question goes...
The logback.xml is the only configuration file in NiFi that you can edit whose changes will take effect without needing a NiFi restart. Some caveats... NiFi did not write logback, and it certainly has its quirks. For example, if you enter a bad configuration, it may simply stop logging. Also, the way maxHistory works in logback prevents the cleanup from looking at rolled logs older than the maxHistory window, so after editing the logback.xml you will need to clean up those older rolled logs manually and not expect logback to do that cleanup for you. Since you are making big changes to the file naming pattern and potentially the rolling policy, I'd encourage you to restart NiFi anyway so it cleanly starts writing to the new log format on startup. Hope you found this helpful. If so, take a moment to log in and click accept on this solution, Matt
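Putting Option 1 together, a minimal appender sketch might look like the following (the appender name and encoder pattern here are assumptions; keep the ones already present in your logback.xml):

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- %i is required so size-based rolls within a single day get a one-up number -->
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>1GB</maxFileSize>
        <maxHistory>1</maxHistory>
        <!-- purge oldest rolled logs once they exceed 3GB total; the active log is never removed -->
        <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
```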
04-26-2021
05:51 AM
@cgmckeever It would be helpful if you shared the specifics of the NiFi version you are testing with and the environment in which you are testing it. Thanks, Matt
04-26-2021
05:45 AM
@naga_satish If your NiFi is running secured (HTTPS), then every action must be authenticated and authorized, and this includes calls made to the NiFi REST API.

NiFi by default will always support a client/user certificate as the first attempted method of authentication. Using a valid client/user certificate is the most common method when interacting with NiFi via the REST API because it does not require the client to acquire a bearer token, unlike other methods such as the ldap-provider. Also, bearer tokens are only valid on the node from which they were acquired (a token issued by node 1 can't be used on node 2, 3, 4, etc.).

The best way to see how these commands are executed is to use the developer tools in your browser while performing the actions via the NiFi UI directly. You can even copy the curl command from the developer tools. For example, in the Chrome browser click settings --> More tools --> Developer tools. In the panel that opens, click the "Network" tab. Now you can see the calls being made as you perform them via the UI, and you can right-click on any call and select "Copy as cURL".

Now, if you still want to use tokens, here is one example based on using a login provider. To get a token for a user:

curl 'https://<nifi-hostname>:<nifi-port>/nifi-api/access/token' --data-raw 'username=<username>&password=<password>' --compressed --insecure

The return from the above is the bearer token for the user. This bearer token is only valid for the duration of the configured expiration in the login-identity-providers.xml file.
The following command can then be used to fetch the current state of a processor (<TOKEN> is the string returned from above):

curl 'https://<nifi-hostname>:<nifi-port>/nifi-api/processors/<processor-UUID>/state' -H 'Authorization: Bearer <TOKEN>' --compressed --insecure

The following command can then be used to clear the state on a processor:

curl 'https://<nifi-hostname>:<nifi-port>/nifi-api/processors/<processor-UUID>/state/clear-requests' -X 'POST' -H 'Authorization: Bearer <TOKEN>' --compressed --insecure

Since your setup may be totally different in how it authenticates your users, it is best to use the browser's developer tools to watch the REST API actions in progress to understand the interactions. Hope this helps, Matt
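The token flow above can be sketched as a small shell script. The hostname, port, and processor UUID below are placeholder assumptions; the curl calls are commented out since they only work against a live, login-provider-secured NiFi:

```shell
#!/bin/sh
# Placeholder values -- replace with your own NiFi details (assumptions, not real hosts).
NIFI_HOST="nifi01.example.com"
NIFI_PORT="8443"
PROC_UUID="0176f0d5-0162-1000-0000-00000a1b2c3d"

BASE_URL="https://${NIFI_HOST}:${NIFI_PORT}/nifi-api"

# 1. Acquire a bearer token (uncomment against a live NiFi using a login provider):
# TOKEN=$(curl -sk "${BASE_URL}/access/token" --data-raw "username=${NIFI_USER}&password=${NIFI_PASS}")

# 2. Fetch processor state with that token:
# curl -sk "${BASE_URL}/processors/${PROC_UUID}/state" -H "Authorization: Bearer ${TOKEN}"

# 3. Clear processor state (note: POST, not GET):
# curl -sk -X POST "${BASE_URL}/processors/${PROC_UUID}/state/clear-requests" -H "Authorization: Bearer ${TOKEN}"

# Show the composed endpoints so the URL layout is easy to verify:
echo "${BASE_URL}/access/token"
echo "${BASE_URL}/processors/${PROC_UUID}/state"
```

Remember that the token in step 1 is only honored by the node that issued it, so in a cluster, point all three calls at the same node.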
04-16-2021
06:06 AM
@Vickey The File Filter property of the UnpackContent processor takes a Java regular expression and can be used when unpacking tar or zip files. In your UnpackContent processor, set the "Packaging Format" to either "zip" or "tar" based on the package format used by your source file. Then set a Java regular expression such as the one below to extract only files within that package whose filename ends with the .csv, .txt, or .xml extension: .*\.(txt|xml|csv) Hope this helps, Matt
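To sanity-check the expression outside NiFi, you can run the equivalent pattern over some sample entry names with grep (the filenames below are made up; `^` and `$` anchors are added because Java's matches() implicitly matches the whole filename, while grep matches substrings):

```shell
# Made-up entry names you might find inside a zip/tar archive.
printf '%s\n' data.csv notes.txt report.xml image.png archive.csv.bak |
  grep -E '^.*\.(txt|xml|csv)$'
```

Only data.csv, notes.txt, and report.xml pass the filter; archive.csv.bak is rejected because the name must end in one of the listed extensions.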
04-13-2021
10:50 AM
@Law While Jolt transforms are not NiFi specific and not something I am strong with myself, you may find these links helpful: https://intercom.help/godigibee/en/articles/3096940-simple-if-else-with-jolt https://community.cloudera.com/t5/Community-Articles/Jolt-quick-reference-for-Nifi-Jolt-Processors/ta-p/244350 Hope this helps, Matt
04-12-2021
06:35 AM
1 Kudo
@ram_g With all 100 FlowFiles committed to the success relationship of your custom processor at the same time, how do you want NiFi to determine their priority order? If you can output some attributes on each FlowFile that your custom processor creates, those attribute values could be used to set the processing order downstream. Hope this helps, Matt
04-12-2021
06:16 AM
1 Kudo
@john I have an HDF 3.4.1.1 cluster (based off NiFi 1.11.4) set up with PGs version controlled, and I can change processors from started to stopped to disabled without it triggering a local change. However, HDF 3.4.1.1 ships with NiFi-Registry 0.3, not 0.8. I have another HDF 3.5.2 cluster (based off NiFi 1.12.1) which ships with NiFi-Registry 0.8. In that cluster, changing a processor from started to stopped to disabled does trigger a local change. I see someone filed a Jira about this change in behavior: https://issues.apache.org/jira/browse/NIFI-8160 The tracking of enabled and disabled state in NiFi-Registry was added as part of: https://issues.apache.org/jira/browse/NIFI-6025 Hope this helps, Matt
04-12-2021
05:49 AM
@AnkushKoul Since the 30 seconds since the last execution has passed, the processor is available to be immediately scheduled once a thread becomes available, so the second thread would not wait until 60 seconds. This setting is the minimum wait between executions.

Other factors come into play that can affect component execution scheduling. NiFi hands out threads to processors from the Max Timer Driven Thread Count resource pool, set via Controller Settings under the global menu in the upper right corner. Naturally you will have more components on your canvas than the size of this resource pool (which should initially be set to only 2-4 times the number of cores you have on a single node, since the setting applies per node). NiFi will hand these available threads out to processors requesting CPU time to execute. Most component threads are in the range of milliseconds of execution, but some can be more resource intensive and take longer to complete. Before increasing this resource pool, you should monitor the CPU impact/usage with all your dataflows running, then make small increments if resources exist.

Hope this answers your questions. If so, please take a moment to accept the answer(s) that helped. Matt
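The 2-4x-cores sizing guidance above can be sketched as a quick shell calculation (the core count is hardcoded here as an assumption; on a real node you would use nproc):

```shell
# Assume an 8-core node (on a real host, use: CORES=$(nproc)).
CORES=8
MIN_POOL=$((CORES * 2))
MAX_POOL=$((CORES * 4))
echo "Suggested Max Timer Driven Thread Count: ${MIN_POOL}-${MAX_POOL}"
# prints: Suggested Max Timer Driven Thread Count: 16-32
```

Remember this is a starting point per node, not a target: grow it in small steps only after confirming spare CPU while all dataflows are running.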
04-12-2021
05:35 AM
@Masi The exception does not appear to be related to load-balanced connections in NiFi. LB connections utilize NiFi S2S in the background, which does not use MySQL. Matt
04-12-2021
05:28 AM
1 Kudo
@Jarinek With the few details provided, it sounds like this exception is related to storing the peer details returned by a Remote Process Group (RPG) fetching Site-To-Site (S2S) details from a target NiFi cluster. Did you run out of disk space on any of your local disks, or on the disks of the target NiFi cluster of your RPG? If so, did you free up space and restart your NiFi to see if the repository could checkpoint and correct the issue? Hope this helps, Matt