09-09-2024
01:43 PM
1 Kudo
@Rohit1997jio You have topic A with your source messages and two consumer groups, each pulling all the messages from topic A. While both dataflows consume the same messages, each may fail on different messages during InvokeHTTP execution. You are then writing the FlowFiles that failed InvokeHTTP to another topic R, which both consumer groups can consume from, so both consumer groups get a copy of any message written to that topic. Your dataflow is working exactly as designed. You must keep the retry logic of the two dataflows independent of one another.

I also don't understand why you would take on the overhead of ingesting the same messages twice into your NiFi. Why not have a single ConsumeKafka ingest the messages from the topic and then route its success relationship twice (once to InvokeHTTP A and once to InvokeHTTP B)? And why publish failed or retry FlowFile messages to an external topic R just so they can be consumed back into your NiFi? It would be more efficient to just keep them in NiFi and create a retry loop on each InvokeHTTP. NiFi even offers retry handling directly on the relationships within the processor configuration.

If you must write the messages out to just one topic R, you'll need to append something to each message that indicates which InvokeHTTP (A or B) failure or retry resulted in it being written to topic R. Then have a single retry dataflow that consumes from topic R and extracts that A or B identifier from the message so it can be routed to the correct InvokeHTTP. It just seems like a lot of unnecessary overhead.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
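As a rough sketch of that identifier approach (the attribute name "retry.source" is just an example I made up): before the PublishKafka writing to topic R in each dataflow, add an UpdateAttribute processor that sets retry.source = A (or B). On the Kafka 2.x processors there are properties along the lines of "Attributes to Send as Headers (Regex)" on PublishKafka and "Headers to Add as Attributes (Regex)" on ConsumeKafka that let that attribute travel as a Kafka header and come back as an attribute. The single retry dataflow could then use a RouteOnAttribute processor with two dynamic properties, for example:

routeToA = ${retry.source:equals('A')}
routeToB = ${retry.source:equals('B')}

The routeToA relationship would feed back into InvokeHTTP A and routeToB into InvokeHTTP B.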
09-09-2024
01:22 PM
1 Kudo
@Chetan_mn Details are very beneficial when seeking assistance in the community. Why two different NiFi instances would produce different FlowFile attribute contents makes no sense to me.

1. Are these two instances of NiFi just two nodes in the same NiFi cluster?
2. What version(s) of NiFi are being used?
3. What processor is being used to handle your http requests? I am assuming the HandleHttpRequest processor.
4. How is that HTTP processor configured?
5. Can you share a sample http request?
6. What is the source generating the request? Is it an automated process?

Thank you,
Matt
09-09-2024
01:07 PM
@AlokKumar The more detail you provide, the better response you will get. In this case, more detail around the content/format of the source file (samples are great) and how it is being or will be processed downstream would help.

A NiFi FlowFile is what is passed from one processor component to the next. There is no direct relationship between the processing done in one processor versus another. Each processor reads the FlowFile (which consists of FlowFile attributes/metadata and FlowFile content), executes its processor code against that FlowFile, and outputs one or more FlowFiles to one or more outbound relationships. So processing the content of a FlowFile becomes the responsibility of each processor separately. The questions you ask correlate to the answers you'll get.

With what you shared, I can only suggest that you remove the first line from the content of the FlowFile so that downstream processors will only have line 2 to process against. This can be accomplished easily with a ReplaceText processor. Simply configure it with "Evaluation Mode" = "Line-by-Line", "Line-by-Line Evaluation Mode" = "First-Line", and the "Set empty string" checkbox checked on the "Replacement Value" property. The output FlowFile's content, which is sent to the success relationship, will have everything except its first line.

Do you need to preserve line 1? If so, maybe use ExtractText to extract line 1 to a FlowFile attribute that you can use after processing to add the line back to the content with another ReplaceText.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
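For example, assuming a simple two-line file (this content is made up):

HEADER|2024-09-09|batch42
value1,value2,value3

After the ReplaceText configuration above, the content sent to the success relationship would just be:

value1,value2,value3

And if you need to keep line 1, an ExtractText dynamic property such as original.header = ^(.*) should capture the first line into FlowFile attributes (something like original.header and original.header.1) before you strip it.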
09-09-2024
09:34 AM
1 Kudo
@AlokKumar The idea is that after renaming your FlowFile, you use the PutFile processor to write that file to your archive directories on each of the NiFi instances. That ends that dataflow.

You then start a new dataflow that consists of only the GetFile processor with its success relationship auto-terminated and the "Minimum File Age" property set to "30 days". Start this processor and it will continuously check the target archive directory for any files whose last modified timestamp has exceeded 30 days. Files older than 30 days will be consumed and removed from the archive, assuming the "Keep Source File" property is set to "false". Since the success relationship is set to auto-terminate, the FlowFile produced is then terminated.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
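A minimal sketch of that purge flow (the directory path is just an example):

GetFile:
Input Directory = /data/nifi/archive
Keep Source File = false
Minimum File Age = 30 days
success relationship = auto-terminated

With this configuration, anything GetFile picks up is deleted from the directory and the resulting FlowFile is immediately dropped.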
09-09-2024
09:18 AM
@P2as Zookeeper needs more than one node in order to establish quorum; a single ZK instance will not work. The next question is how client/user authentication and authorization is set up. While I did not see any untrusted proxy exceptions or SSL exceptions in what you shared, I wonder if you are encountering a mutual TLS issue between your two nodes resulting in your connection exception.

When you try to access a NiFi node's URL, your request is replicated by the elected cluster coordinator to all nodes in the cluster, and those nodes need to respond with what access the authenticated user is authorized for on each node. It is this replication request that is failing. You may need to dig a bit deeper into your logs and configurations, making sure that the NiFi instances successfully bound to those ports and that those ports are not being blocked.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
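A couple of quick checks from each node's shell (the hostname and port here are just examples; substitute the values from your nifi.properties):

# Is the node actually listening on its HTTPS and cluster protocol ports?
netstat -tlnp | grep 8443

# Does a TLS handshake to the other node complete, and is the certificate chain what you expect?
openssl s_client -connect node2.example.com:8443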
09-09-2024
08:54 AM
1 Kudo
@jcsilva @gourexe The red flag that stands out here is that you stated you were using NiFi 1.16.2 and trying to upgrade to NiFi 1.16.3; however, you then state that the problematic component is version 1.18.0. This implies that you or someone else added additional nar files from NiFi 1.18.0 to your NiFi 1.16.2 install's lib directory. Those newer component versions may have introduced new properties that do not exist in the NiFi 1.16.x versions. When you upgraded to NiFi 1.16.3, those 1.18.0 nar versions no longer existed, so NiFi attempted to load the same class from 1.16.3, resulting in your issue. You should instead upgrade to NiFi 1.18.0 or newer, or you'll need to add back into 1.16.3 the same 1.18.0 nars that were previously added to 1.16.2.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
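A quick way to spot mixed component versions (the path assumes a default install layout):

# Any nars in lib whose filenames don't carry the core 1.16 version stand out here
ls /opt/nifi/lib/*.nar | grep -v 1.16

Whatever that returns beyond the expected bundled nars is what was manually added and what you'd need to carry forward.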
09-09-2024
08:43 AM
1 Kudo
@NaveedButt The exception you shared is complaining about the nifi.sensitive.props.key property not being set in the nifi.properties file at startup. This props key value is used to encrypt all sensitive component properties written to the flow.json.gz as you build your dataflows via the NiFi canvas.

If you have not built any dataflows that utilize sensitive properties in their component configurations, you can just set some value and start your NiFi. If you already have existing dataflows on your NiFi canvas containing sensitive values (passwords) set in their configuration(s), you'll need to retrieve the exact props key from the original nifi.properties and use that. If the original props key is not set, NiFi will fail to start when it tries to load the flow.json.gz, since it will not be able to decrypt those passwords using the new props key.

If you get stuck, you can carefully manually edit the flow.json.gz and remove all "enc{.....}" values. This will allow you to start your NiFi using a new sensitive props key value; however, all your configured passwords will be cleared and you will need to enter them again in your various component configurations.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
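If you do need to hand-edit that file, something along these lines works (take a backup first; the paths assume a default conf directory):

cd /opt/nifi/conf
cp flow.json.gz flow.json.gz.bak
gunzip flow.json.gz
# edit flow.json and clear out the enc{...} values, then re-compress
gzip flow.json

Depending on your NiFi version, there may also be a ./bin/nifi.sh set-sensitive-properties-key command that can set or migrate the key for you without hand-editing.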
09-09-2024
06:30 AM
@yagoaparecidoti There are still multiple ways to move complete dataflows from one NiFi to another NiFi. When you start a NiFi for the first time, NiFi creates the root Process Group (PG). This is the blank canvas that you see when you log in. I would suggest creating a child PG at the root PG level and building all your dataflows within that new child PG.

Method 1: Use NiFi-Registry. You can set up a single NiFi-Registry that all your NiFi deployments can use. Start version control on your child PG; this will write the PG flow definition to that NiFi-Registry. Give your other NiFi instances the ability to read the NiFi-Registry bucket and flow in which that flow definition is stored, then import that version controlled PG from the shared NiFi-Registry on each of them. Any time a newer version of that flow definition is pushed to NiFi-Registry, all the other NiFi instances using it will show that a newer version is available.

Method 2: Use the rest-api. You can utilize the rest-api to download a flow definition of that child PG, then utilize the rest-api on other instances of NiFi to instantiate that downloaded flow definition onto the root PG of those instances.

As far as the NiFi rest-api goes, it is always easier to utilize the "developer tools" available within your browser to capture the rest-api calls as they are made while you manually perform the steps directly via the NiFi UI. This allows you to copy a rest-api call as a curl command. From that captured curl command you can see the rest-api endpoint used to perform the request along with the format of any data or variables needed in the request.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
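As a sketch of method 2 (the host, token, and process group id are placeholders you'd capture from your own environment via those developer tools):

# Download the flow definition of the child PG
curl -H "Authorization: Bearer $TOKEN" -o my-flow.json "https://nifi-host:8443/nifi-api/process-groups/<child-pg-id>/download"

Depending on your NiFi version there should also be an upload counterpart (/nifi-api/process-groups/<parent-pg-id>/process-groups/upload) for importing that file on the other instance, which is one of the calls you can capture from the browser when you import a flow definition manually.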
09-09-2024
06:08 AM
1 Kudo
@AlokKumar You have multiple questions here, so let me address them separately.

Renaming your content file:
A NiFi FlowFile holds its filename in a FlowFile attribute named "filename". You can use the UpdateAttribute processor to change/modify a FlowFile's filename through the use of the NiFi Expression Language (NEL). Add a new dynamic property to the UpdateAttribute processor using the "+" icon in the upper right corner of the processor's configure window. Property = the FlowFile attribute to be modified (filename). Value = a NEL statement that manipulates the existing filename as desired. Example:

${filename:substringBeforeLast('.')}_${now():format('MM_dd_yyyy_hh_mm')}.${filename:substringAfterLast('.')}

With the above example, I passed my UpdateAttribute a FlowFile with a "filename" attribute of "testfile.txt". The sample NEL statement modified the filename attribute on the FlowFile to "testfile_09_09_2024_12_57.txt".

Archiving a FlowFile:
You were not clear on how you want to handle your archived files. Do you have some external service that monitors and cleans out your archive folder? NiFi has a PutFile processor that can write FlowFiles to a local directory on your NiFi instance, or other processors that may be able to write to an external service. If you are looking for NiFi to monitor that directory and purge archived files after 30 days, you would need to build another NiFi dataflow to handle that. For example: a GetFile processor (configured with a "Minimum File Age" of "30 days") with its "success" relationship auto-terminated. This will consume files older than 30 days and purge them from the configured directory.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
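As a sketch of the archive write itself (the directory is just an example path), the PutFile processor could be configured with:

Directory = /data/nifi/archive
Conflict Resolution Strategy = replace
Create Missing Directories = true

The rename dataflow then ends at PutFile, and the separate GetFile purge dataflow watches that same directory.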
09-09-2024
05:37 AM
@Leo3103 It appears you are handling the conversion of your NiFi flow definition for use in your MiNiFi incorrectly as per the documentation: https://github.com/apache/nifi/blob/main/minifi/minifi-docs/src/main/markdown/minifi-java-agent-quick-start.md

You should be downloading your flow definition (json file) via the NiFi UI. Then you should rename that file to "flow.json.raw" (no mention of compression here) and place it in the MiNiFi conf directory. Once you have your flow.json.raw file in the minifi/conf directory, launch that instance of MiNiFi and your dataflow begins automatically.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
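In other words, something like this (assuming a default MiNiFi install path and a downloaded definition named MyFlow.json):

mv MyFlow.json flow.json.raw
cp flow.json.raw /opt/minifi/conf/
/opt/minifi/bin/minifi.sh start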