Member since
07-30-2019
3421
Posts
1628
Kudos Received
1010
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 114 | 01-13-2026 11:14 AM |
| | 228 | 01-09-2026 06:58 AM |
| | 524 | 12-17-2025 05:55 AM |
| | 585 | 12-15-2025 01:29 PM |
| | 565 | 12-15-2025 06:50 AM |
07-02-2024
07:33 AM
@Vikas-Nifi The following error is directly related to a failure to establish certificate trust in the TLS exchange between NiFi's PutSlack processor and your Slack server:

javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

The PutSlack processor uses the StandardRestrictedSSLContextService to define the keystore and truststore files it will use. The truststore must contain the complete trust chain for the target Slack server's serverAuth certificate. You can use:

openssl s_client -connect <companyName.slack.com>:443 -showcerts

to output all public certs the server presents along with the serverAuth cert. I noticed with my Slack endpoint that this was not the complete trust chain (the root CA certificate for ISRG Root X1 was missing from the chain). You can download the missing root CA public cert directly from Let's Encrypt and add it to the truststore set in the StandardRestrictedSSLContextService:

https://letsencrypt.org/certificates/
https://letsencrypt.org/certs/isrgrootx1.pem
https://letsencrypt.org/certs/isrg-root-x2.pem

You might also want to make sure all intermediate CAs are added, not just the intermediate returned by the openssl command, in case a server change directs you through a different chain.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
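The trust check itself can be reproduced off to the side with openssl. This is a minimal sketch, assuming openssl is on your PATH; "demo-root" and "demo-leaf" are throwaway names, and the commented s_client command is the one to run against your real workspace:

```shell
# Against your real endpoint (network required) you would run:
#   openssl s_client -connect companyName.slack.com:443 -showcerts </dev/null
# Offline illustration: a leaf cert verifies only when its root CA is
# present in the trust file, which is exactly what PKIX path building
# does inside the JVM.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-root" -keyout root.key -out root.pem 2>/dev/null
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=demo-leaf" -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -days 1 -out leaf.pem 2>/dev/null
openssl verify -CAfile root.pem leaf.pem
```

Once the missing root (ISRG Root X1 here) is downloaded, importing it into the truststore referenced by the SSL context service (for example with keytool -importcert for a JKS/PKCS12 truststore) completes the chain.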
07-02-2024
06:59 AM
@greenflag Not knowing anything about this REST API endpoint, all I have are questions. How would you complete this task outside of NiFi? How would you accomplish this using curl from the command line? What do the REST API docs for your endpoint say about how to get files? Do they expect you to pass the filename in the request? What is the endpoint that would return the list of files?

My initial thought here (making numerous assumptions about your endpoint) is that you would likely need multiple InvokeHTTP processors. The first InvokeHTTP in the dataflow hits the endpoint that outputs the list of files in the endpoint directory, which would end up in the content of the FlowFile. Then you split that FlowFile by its content so you have multiple FlowFiles (one per listed file). Then rename each FlowFile using the unique filename, and finally pass each to another InvokeHTTP processor that actually fetches that specific file.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
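The list-then-fetch pattern above can be sketched from the command line. Everything here is a placeholder: the listing is simulated with a local file, and https://api.example.com/... stands in for whatever your API's docs actually specify:

```shell
# Offline sketch of the list -> split -> rename -> fetch pattern.
# Step 1 (first InvokeHTTP): the listing response becomes FlowFile
# content; simulated here with a local file.
printf 'report-a.csv\nreport-b.csv\n' > listing.txt
# Step 2 (split by content): one entry per FlowFile.
# Step 3 (second InvokeHTTP): fetch each named file, roughly:
#   curl -fsS -o "$name" "https://api.example.com/files/$name"
while read -r name; do
  echo "would fetch: https://api.example.com/files/$name"
done < listing.txt
```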
07-02-2024
06:16 AM
@NIFI-USER Are you seeing the same behavior even when not using the "yield" retry strategy? What about when retry is not checked? Upon failure, FlowFiles should immediately be transferred to the connection containing the "failure" relationship.

What are the penalty and yield durations set to on your PublishKafkaRecord_1_0? What version is your target Kafka (you are using a rather old Kafka client, version 1.0)? As far as your Kafka topic goes, how many partitions are on the topic? How many concurrent tasks are set on PublishKafkaRecord? How many nodes are in your NiFi cluster?

Thanks,
Matt
07-02-2024
06:05 AM
@Heiko Thanks for sharing. The choice between "USE_USERNAME" and "USE_DN" needs to be evaluated against the specific structure of the end user's LDAP/AD. With AD, the user commonly logs in with their sAMAccountName, and very often the sAMAccountName value is not the same string used within the user's DN. While you would still be able to log in using your sAMAccountName and password, the user identity passed to the authorizer would be the CN value from that full DN. (Your regex assumes the CN consists of only upper or lower case letters and numbers, which may not hold for all DNs.)

Then, with the switch to using the CN from the DN, you need to consider equivalent changes in the ldap-user-group-provider in authorizers.xml. You'll need to make sure whatever user identity strings come out of authentication via the DN are properly mapped to group identities.

Both solutions will work, and both need careful evaluation to set up. I typically find USE_USERNAME more consistent in structure (LDAP and AD), and thus less impacted by the corner-case oddities that USE_DN can introduce.

Thanks again for your contributions to the community. There is often more than one way to solve most queries in Apache NiFi.

Matt
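To illustrate the regex caveat: a pattern that only allows letters and digits in the CN silently fails on DNs whose CN contains spaces or punctuation. A minimal sketch with made-up DNs (the "narrow" pattern mirrors a mapping regex of that shape, not anyone's actual config):

```shell
# Made-up DNs; the narrow pattern assumes CN is strictly alphanumeric.
narrow='^CN=[a-zA-Z0-9]+,'
# Matches, extracting "CN=jsmith,":
printf 'CN=jsmith,OU=Users,DC=corp,DC=com\n' | grep -oE "$narrow"
# CN with a period and space: zero matches, so the mapping would fail.
printf 'CN=Smith. John,OU=Users,DC=corp,DC=com\n' | grep -cE "$narrow" || true
```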
07-01-2024
03:05 PM
1 Kudo
@NeheikeQ Yes, newer 1.x versions of NiFi Registry will support older versions of NiFi version controlling to them.

For NiFi, after the upgrade, load the flow.xml.gz on one node and start it. Then start the other nodes so that they all inherit the flow from the node where you had a flow.xml.gz. At that point all nodes should join successfully and will have the same dataflow loaded.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
07-01-2024
02:55 PM
1 Kudo
@Dave0x1 Typically the MergeContent processor will utilize a lot of heap when the number of FlowFiles being merged in a single execution is very high and/or the FlowFiles' attributes are very large. While FlowFiles queued in a connection have their attributes/metadata held in NiFi heap, there is a swap threshold at which NiFi swaps FlowFile attributes to disk. When it comes to MergeContent, FlowFiles are allocated to bins (they will still show in the inbound connection count), and FlowFiles allocated to bins cannot be swapped. So if you set the min/max number of FlowFiles or min/max size to a large value, it will result in large amounts of heap usage. Note: FlowFile content is not held in heap by MergeContent.

So the way to create very large merged files while keeping heap usage lower is to chain multiple MergeContent processors together in series: merge a batch of FlowFiles in the first MergeContent, then merge those into a larger merged FlowFile in a second MergeContent. Also be mindful of extracting content to FlowFile attributes or generating FlowFile attributes with large values, to help minimize heap usage.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
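The chained-merge sizing works out as simple division. A back-of-envelope sketch (the counts are made-up illustration values, not recommendations):

```shell
# Hypothetical sizing: two MergeContent stages of 100 FlowFiles each
# produce the same single output as one 10,000-FlowFile merge, but each
# bin only ever holds 100 FlowFiles' attributes in heap at a time.
TOTAL=10000
PER_BIN=100
STAGE1_OUT=$((TOTAL / PER_BIN))        # merged FlowFiles after stage 1
STAGE2_OUT=$((STAGE1_OUT / PER_BIN))   # final merged FlowFiles
echo "stage1 outputs: $STAGE1_OUT, stage2 outputs: $STAGE2_OUT"
```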
07-01-2024
02:44 PM
1 Kudo
@Trilok The older flow.xml.gz format was deprecated as of Apache NiFi 1.16 in favor of the newer flow.json.gz format. NiFi 1.16+ will only load the flow.xml.gz if the flow.json.gz does not already exist during startup; upon successful startup, NiFi will generate the flow.json.gz. NiFi 1.16+ will still write out both the flow.xml.gz and flow.json.gz formats with every change made in the UI.

With the major release of Apache NiFi 2.x, the deprecated flow.xml.gz format was removed. There is no option in NiFi 2.0 to support the older flow.xml.gz format.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
07-01-2024
02:24 PM
1 Kudo
@enam The FetchSFTP processor only supports specifying a target directory path to which the source file on the SFTP server will be moved; it does not support renaming the source file during that move. If you configure a filename in the "Move Destination Directory", FetchSFTP creates that filename as a directory into which it puts the source file, named from the value of the FlowFile's "filename" attribute.

Your option is to set the "Completion Strategy" to "Delete File" and then route the "success" relationship via two separate outbound connections. The first connection continues down your existing dataflow path, and its FlowFile retains the original filename. The second connection feeds an UpdateAttribute processor (used to change the filename) and then a PutSFTP processor that writes the file back to the SFTP server in the new directory.

UpdateAttribute configuration (click the "plus" icon to add a new dynamic property):
Property = filename
Value = ${filename:substringBeforeLast('.')}-${UUID}.${filename:substringAfterLast('.')}

You can then auto-terminate the "success" relationship of the PutSFTP processor; terminating the FlowFile at the end of connection 2 has no impact on the FlowFile routed to connection 1. Make sure you set the target path in the PutSFTP processor (the path does not include the filename).

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
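The expression above can be sanity-checked outside NiFi. Here is a shell equivalent (the sample filename and UUID are made up; NiFi supplies the real UUID value at runtime):

```shell
# Shell mirror of:
#   ${filename:substringBeforeLast('.')}-${UUID}.${filename:substringAfterLast('.')}
filename="daily-report.2024.csv"                 # sample value
uuid="123e4567-e89b-12d3-a456-426614174000"      # NiFi fills this in
base="${filename%.*}"    # substringBeforeLast('.') -> daily-report.2024
ext="${filename##*.}"    # substringAfterLast('.')  -> csv
echo "${base}-${uuid}.${ext}"
# prints daily-report.2024-123e4567-e89b-12d3-a456-426614174000.csv
```

Note that the split happens on the last "." only, so filenames with multiple dots keep everything before the extension intact.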
07-01-2024
01:47 PM
1 Kudo
@NIFI-USER I am trying to fully understand your use case. So you have a primary Kafka and a backup Kafka for failover? This feels like an odd setup to me. You are trying to plan for a case where the entire primary Kafka cluster is down and NiFi fails over to publishing to an entirely different Kafka cluster?

Option 1: I noticed that when you tried to use "retry" on the "failure" relationship, you configured it to use "Yield". When a FlowFile fails, the processor yields execution for 1 sec (the default yield duration from the Settings tab) and the FlowFile remains in priority slot 1. After the yield duration, that same FlowFile is attempted again. If it fails again, a yield of 1 minute and 1 sec is applied to the processor before the FlowFile is processed again; on the next retry that is doubled to 2 mins 2 secs, then to 4 mins 4 secs, etc. With the yield retry policy, processing of the next FlowFile in the inbound connection is blocked until the first has failed the configured number of retries. This can really slow the movement of FlowFiles to the connection containing the "failure" relationship.

Option 2: Instead you could try the "penalize" retry policy. This policy applies a penalty duration to the FlowFile when it fails and leaves it in the inbound connection queue. It does not yield the processor, so the processor continues to be scheduled to execute, and any penalized FlowFiles are ignored until the penalty duration expires. The default penalty duration is 30 secs, configured on the Settings tab (you may want to reduce this value for your use case). The penalty duration also doubles with each subsequent retry. After the configured number of retries, the FlowFile is moved to the outbound connection containing the "failure" relationship. Both of the above strategies allow unexpected failures an opportunity to still succeed to your primary Kafka, with some delay.

Option 3: The alternative is to stop using the "retry" capability on the "failure" relationship. This means every FlowFile that fails will immediately fail, get penalized, and be transferred to the "failure" relationship. So in a scenario with only a temporary unexpected failure, you'll have some FlowFiles going to your backup Kafka cluster and others still going to your primary Kafka cluster. FlowFiles routed to "failure" are automatically penalized as well. Since you are not looping the failure relationship's connection back to the source processor, you may also want to set the penalty duration to 0 sec in the Settings tab so that the downstream processor for the secondary Kafka cluster can execute on the FlowFile immediately instead of waiting for the penalty to expire.

No matter which option you choose, it is important to adjust the penalty and/or yield duration settings to meet your use case needs.

Resources: Relationships Tab, Settings Tab

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
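The doubling described in Option 1 produces a schedule like the following (a sketch of the arithmetic from the description above, not NiFi's actual implementation):

```shell
# Yield-retry backoff per the description: the first failure yields the
# base 1 s, then 1 min 1 s (61 s), doubling on each subsequent retry.
yield=61
echo "retry 1 wait: 1s"
for retry in 2 3 4 5; do
  echo "retry $retry wait: ${yield}s"
  yield=$((yield * 2))
done
# retry 2 -> 61s, retry 3 -> 122s, retry 4 -> 244s, retry 5 -> 488s
```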
07-01-2024
10:00 AM
@Lorenzo There is not enough information here to give a good response.
1. Providing the full stack trace output from the nifi-app.log may surface helpful details (challenging considering the NPE nature of the exception).
2. Are you sure the SMTP server supports username and password authentication, and not OAuth2-based authentication instead? See PutEmail Additional Details.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt