Member since: 07-30-2019
Posts: 3427
Kudos Received: 1632
Solutions: 1011
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 31 | 01-27-2026 12:46 PM |
| | 469 | 01-13-2026 11:14 AM |
| | 928 | 01-09-2026 06:58 AM |
| | 875 | 12-17-2025 05:55 AM |
| | 936 | 12-15-2025 01:29 PM |
10-01-2024
12:24 PM
@imvn Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
09-27-2024
10:22 AM
1 Kudo
@Shalexey Your query does not contain much detail about your use case, but I'll try to provide some pointers here.

NiFi processor components have one or more defined relationships; these relationships are where a NiFi FlowFile is routed when a processor completes its execution. When you assign a processor relationship to more than one outbound connection, NiFi clones the FlowFile once for each additional connection. Looking at the dataflow design you shared, you appear to have the "success" relationship routed twice out of the UpdateAttribute processor (meaning the original FlowFile is sent to one of those connections and a new cloned FlowFile is sent to the other). So you can't simply route both of these FlowFiles back to your QueryRecord processor, as each would be executed against independently.

If I understand your use case correctly, you ingest a CSV file that needs to be updated with an additional new column (the primary key), and the value for that new column is fetched from another DB via the ExecuteSQLRecord processor. The problem is that ExecuteSQLRecord would overwrite your CSV content. So what you need to build is a flow that gets the enrichment data (the primary key) and adds it to the original CSV before the PutDatabaseRecord processor. Others might have different suggestions, but here is one option that comes to mind:

GetFile --> gets the original CSV file.
UpdateAttribute --> sets a correlation ID (corrID = ${UUID()}) so that when the FlowFile is cloned later, both copies can be correlated via this attribute, which will be the same on both.
ExecuteSQL --> queries the max key from the DB.
QueryRecord --> trims the output to just the needed max key.
ExtractText --> extracts the max key value from the content into a FlowFile attribute (maxKey).
ModifyBytes --> set "Remove All Content" to true to clear the content from this FlowFile (this does not affect FlowFile attributes).
MergeContent --> Minimum Number of Entries = 2, Correlation Attribute Name = corrID, Attribute Strategy = Keep All Unique Attributes. (This merges the original and the clone, which share the same value in the "corrID" FlowFile attribute, into one FlowFile containing only the CSV content.)
UpdateRecord --> used to insert the max key value from the maxKey FlowFile attribute into the original CSV content. (The record reader can infer the schema; however, the record writer will need a defined schema that includes the new "primaryKey" column. You can then add a dynamic property to insert the maxKey FlowFile attribute into the "primaryKey" CSV column; see the sketch after this post.)
PutDatabaseRecord --> writes the modified CSV to the destination DB.

Even if this does not match your use case exactly, maybe you can apply the NiFi dataflow design concept above to solve your specific detailed use case.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
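To make the MergeContent and UpdateRecord steps above concrete, here is a minimal sketch of the relevant property settings, assuming the attribute names used in the post (corrID, maxKey) and a record path of /primaryKey for the new column; adjust to your actual schema:

```
# MergeContent -- merge the original CSV FlowFile with its content-cleared clone
Merge Strategy             = Bin-Packing Algorithm
Minimum Number of Entries  = 2
Correlation Attribute Name = corrID
Attribute Strategy         = Keep All Unique Attributes

# UpdateRecord -- write the fetched key into every record of the CSV
Record Reader              = CSVReader (schema may be inferred)
Record Writer              = CSVRecordSetWriter (defined schema must include "primaryKey")
Replacement Value Strategy = Literal Value
/primaryKey                = ${maxKey}    # dynamic property: record path -> attribute value
```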
09-27-2024
08:45 AM
1 Kudo
@sha257 The TLS properties need to be configured only if your LDAP endpoint is secured, meaning it requires the LDAPS or START_TLS authentication strategy. When secured, you will always need a TLS truststore, but may or may not need a TLS keystore (this depends on your LDAP setup). For unsecured LDAP URL access, the TLS properties are not necessary. Even when unsecured (meaning the connection is not encrypted), the Manager DN and Manager Password are still required to connect to the LDAP server. Based on the information shared, I cannot say what your LDAP setup does or does not require; you'll need to work with your LDAP administrators to understand the requirements for connecting to your LDAP. Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
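As an illustration, here is a minimal sketch of the ldap-provider entry in NiFi's login-identity-providers.xml for a secured (LDAPS) endpoint; the hostnames, DNs, paths, and passwords below are placeholders, and your LDAP setup may require different values:

```xml
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <!-- SIMPLE (unsecured), LDAPS, or START_TLS -->
    <property name="Authentication Strategy">LDAPS</property>

    <!-- Required whether the connection is secured or not -->
    <property name="Manager DN">cn=nifi-svc,ou=users,dc=example,dc=com</property>
    <property name="Manager Password">********</property>

    <!-- TLS properties: only needed for LDAPS/START_TLS; a keystore may also be
         required depending on the LDAP server's client-auth configuration -->
    <property name="TLS - Truststore">/opt/nifi/conf/truststore.p12</property>
    <property name="TLS - Truststore Password">********</property>
    <property name="TLS - Truststore Type">PKCS12</property>

    <property name="Url">ldaps://ldap.example.com:636</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_DN</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>
```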
09-26-2024
05:22 AM
1 Kudo
@Vikas-Nifi Your dataflow is working as designed. Your ListFile is producing three FlowFiles (one for each file listed), and each of those FlowFiles then triggers an execution of your FetchFile, which you have configured to fetch the content of only one of those files. If you only want to fetch "test_1.txt", you need to either configure ListFile to list only "test_1.txt", or add a RouteOnAttribute processor between ListFile and FetchFile so that only the listed FlowFile matching ${filename:equals('test_1.txt')} is routed to FetchFile while the other listed FlowFiles are auto-terminated (see the sketch after this post). The first option, listing only the file whose content you want to fetch, is the better one unless there is more to your use case than you have shared. Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
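For reference, a minimal sketch of the RouteOnAttribute option; "test_1" is just an arbitrary name chosen here for the dynamic property (which becomes a relationship):

```
# RouteOnAttribute -- placed between ListFile and FetchFile
Routing Strategy = Route to Property name

# Dynamic property; creates a "test_1" relationship to connect to FetchFile
test_1 = ${filename:equals('test_1.txt')}

# Auto-terminate the "unmatched" relationship to drop the other listed FlowFiles
```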
09-25-2024
08:35 AM
1 Kudo
@Twelve @aLiang The crypto.randomUUID() issue when running NiFi over HTTP or on localhost has been resolved via https://issues.apache.org/jira/browse/NIFI-13680. The fix will be part of the next release after NiFi 2.0.0-M4. Thanks, Matt
09-24-2024
10:26 PM
1 Kudo
Record Reader - JsonTreeReader, Record Writer - JsonRecordSetWriter. This is what I am using in my UpdateRecord, and for reference, this is what my flow looks like.
09-24-2024
10:16 AM
1 Kudo
I too faced the same issue. I enabled stickiness on my load balancer target group and it worked!! Hope this helps...
09-23-2024
01:16 AM
1 Kudo
Hi @MattWho Thanks for your response.

> I also don't understand the overhead of ingesting the same messages twice in your NiFi.

My requirement is to send data to different endpoints so that each can perform different operations on the data.

> Why not have a single ConsumeKafka ingesting the messages from the topic and then route the success relationship from the ConsumeKafka twice (once to InvokeHTTP A and once to InvokeHTTP B)?

For me, one flow is like one vendor. I will have multiple vendors, and each one will have their own separate endpoints. Keeping them all in one flow is not possible, so I am creating a separate dataflow and separate retry logic for each. The issue above is with only one vendor: they require the same data (consumed from the same Kafka topic) to be pushed to two separate endpoints, and I am not able to handle the retry logic for them.

> Why publish failed or retry FlowFile messages to an external topic R just so they can be consumed back into your NiFi?

Yes, I want them to be consumed into NiFi again. I publish all failed requests to the retry topic, and they are handled in the retry flow. This way I keep my main flow free of failed requests, while new requests without errors get pushed to the endpoint successfully.

> It would be more efficient to just keep them in NiFi and create a retry loop on each InvokeHTTP. NiFi even offers retry handling directly on the relationships within the processor configuration.

If I add a retry loop to InvokeHTTP and the endpoint is down for a long time, too many requests will get queued in NiFi.

> If you must write the message out to just one topic R, you'll need to append something to the message that indicates which InvokeHTTP (A or B) failure or retry resulted in it being written to topic R. Then have a single retry dataflow that consumes from topic R and extracts that A or B identifier from the message so it can be routed to the correct InvokeHTTP. It just seems like a lot of unnecessary overhead.

Please help me with the retry logic. The data goes into the same retry topic; how can I differentiate whether it failed from dataflow 1 or from dataflow 2?
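One hedged sketch of the tagging approach suggested above, assuming Kafka processors that support header passing (e.g. the PublishKafka/ConsumeKafka 2.x variants) and an attribute name retry.target invented for this example:

```
# Main flow -- before publishing a failed request to retry topic R
# UpdateAttribute (one per branch; use "B" on the InvokeHTTP B branch)
retry.target = A

# PublishKafka -> topic R (carry the attribute as a Kafka header)
Attributes to Send as Headers (Regex) = retry\.target

# Retry flow -- ConsumeKafka <- topic R (turn the header back into an attribute)
Headers to Add as Attributes (Regex) = retry\.target

# RouteOnAttribute -- send each FlowFile back to the correct endpoint flow
Routing Strategy = Route to Property name
to_invokehttp_A  = ${retry.target:equals('A')}
to_invokehttp_B  = ${retry.target:equals('B')}
```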
09-20-2024
07:17 AM
1 Kudo
Hi @MattWho Thanks for the response, much appreciated. What I was looking at doing was simply moving the Registry from one of our servers, which we had set up previously, onto another server we were using for production: not keeping two registries, but using the new one and getting rid of the old one. What I didn't want to do was lose everything from the old one... However, this turned out to be WAY easier than I thought lol. Basically, we were deploying through Ansible and I had missed some configuration values and files when deploying, which meant the new Registry was actually set to use files instead of a git repo for its storage system. Once I found this, I updated it to the git repo and presto! Everything worked as expected, with everything from the git repository available in the new Registry server and all set up. Thanks for the information though, that's also very helpful!
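For anyone hitting the same thing: the storage backend is selected in NiFi Registry's conf/providers.xml. A minimal sketch of the git-backed flowPersistenceProvider (the directory, remote name, and credentials below are placeholders):

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <!-- local clone of the flow-storage git repository -->
    <property name="Flow Storage Directory">./flow_storage</property>
    <!-- push commits to this remote; leave empty to keep commits local only -->
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">git-user</property>
    <property name="Remote Access Password">********</property>
</flowPersistenceProvider>
```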
09-18-2024
05:44 AM
@abhinav_joshi You should have been able to right-click on the "ghost" processor and select the "change version" option. This would present you with all the versions available in your NiFi installation, and simply selecting the one you want to use would resolve your issue. While this works great when your dataflow has created only a few ghost processors, it can be annoying to follow these steps for many components. The real question here is why your deployment of NiFi has multiple versions of the same NiFi nar installed. NiFi would not ship this way, so additional nar(s) of different versions were added to your NiFi lib directory or to the NiFi extensions directory. You should remove these duplicate nars to avoid running into this issue again (see the example after this post). When only one version exists, a dataflow imported/loaded with older component versions will automatically switch to the version present in the NiFi into which the dataflow was loaded (this may mean an older or newer version of the nar classes). Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
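As an illustration of what to look for, duplicate versions of the same nar sitting side by side in the lib or extensions directory are the usual cause; the file names below are hypothetical:

```
nifi/lib/nifi-standard-nar-1.23.2.nar      <-- shipped with this NiFi release
nifi/lib/nifi-standard-nar-1.25.0.nar      <-- manually added duplicate; remove one
```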