Member since: 07-30-2019
Posts: 3390
Kudos Received: 1617
Solutions: 999
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 226 | 11-05-2025 11:01 AM |
| | 439 | 10-20-2025 06:29 AM |
| | 579 | 10-10-2025 08:03 AM |
| | 394 | 10-08-2025 10:52 AM |
| | 435 | 10-08-2025 10:36 AM |
09-27-2024
08:45 AM
1 Kudo
@sha257 The TLS properties need to be configured if your LDAP endpoint is secured, meaning it requires the LDAPS or START_TLS authentication strategies. Even when secured, you will always need the TLS truststore, but may or may not need a TLS keystore (this depends on your LDAP setup). For unsecured LDAP URL access, the TLS properties are not necessary. Even when unsecured (meaning the connection is not encrypted), the Manager DN and Manager Password are still required to connect to the LDAP server.

Based on the information shared, I cannot say what your LDAP setup does or does not require. You'll need to work with your LDAP administrators to understand the requirements for connecting to your LDAP.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
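For reference, a minimal sketch of what a secured ldap-provider entry in login-identity-providers.xml can look like; the host names, DNs, passwords, and paths below are placeholders, and your LDAP setup may require additional properties (for example a TLS keystore, if your server requires client certificate authentication):

```
<!-- Sketch of an ldap-provider in login-identity-providers.xml.
     All values below are illustrative placeholders. -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">LDAPS</property>
    <!-- Manager credentials are required even for unsecured LDAP -->
    <property name="Manager DN">cn=nifi-manager,ou=users,dc=example,dc=com</property>
    <property name="Manager Password">changeit</property>
    <!-- Truststore is always required when secured -->
    <property name="TLS - Truststore">/opt/nifi/conf/truststore.p12</property>
    <property name="TLS - Truststore Password">changeit</property>
    <property name="TLS - Truststore Type">PKCS12</property>
    <property name="Url">ldaps://ldap.example.com:636</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_DN</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>
```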
09-26-2024
05:22 AM
1 Kudo
@Vikas-Nifi Your dataflow is working as designed. Your ListFile is producing three FlowFiles (one for each file listed). Each of those FlowFiles then triggers the execution of your FetchFile, which you have configured to fetch the content of only one of those files. If you only want to fetch "test_1.txt", you need to either configure ListFile to list only "test_1.txt", or add a RouteOnAttribute processor between your ListFile and FetchFile so that only the listed FlowFile matching ${filename:equals('test_1.txt')} is routed to FetchFile, with the other listed FlowFiles auto-terminated. The first option (listing only the file whose content you want to fetch) is the better option unless there is more to your use case than you have shared.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
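The RouteOnAttribute option would be configured roughly like this (a sketch; the dynamic property name "test_file" is an arbitrary example and becomes the relationship name):

```
# RouteOnAttribute configuration (sketch)
Routing Strategy : Route to Property name

# Dynamic property -> creates a "test_file" relationship
test_file = ${filename:equals('test_1.txt')}

# Connect relationship "test_file" to FetchFile;
# auto-terminate the "unmatched" relationship.
```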
09-25-2024
08:35 AM
1 Kudo
@Twelve @aLiang The crypto.randomUUID() issue when running NiFi over HTTP or on localhost has been resolved via https://issues.apache.org/jira/browse/NIFI-13680. The fix will be part of the next release after NiFi 2.0.0-M4. Thanks, Matt
09-24-2024
10:26 PM
1 Kudo
Record Reader: JsonTreeReader. Record Writer: JsonRecordSetWriter. This is what I am using in my UpdateRecord, and for reference, this is what my flow looks like:
09-24-2024
10:16 AM
1 Kudo
I too faced the same issue. I enabled stickiness on my load balancer target group and it worked!! Hope this helps...
09-23-2024
01:16 AM
1 Kudo
Hi @MattWho Thanks for your response.

> I also don't understand the overhead of ingesting the same messages twice in your NiFi. Why not have a single ConsumeKafka ingesting the messages from the topic and then routing the success relationship from the ConsumeKafka twice (once to InvokeHTTP A and once to InvokeHTTP B)?

My requirement is to send data to different endpoints so that they can perform different operations on the data. For me, one flow is like one vendor; I will have multiple vendors, and each will have their own separate endpoints. Keeping everything in one flow is not possible, so I am creating a separate dataflow and separate retry logic for each. The issue above is with only one vendor: they require the same data (consumed from the same Kafka topic) to be pushed to two separate endpoints, but I am not able to handle the retry logic for them.

> Why publish failed or retry FlowFile messages to an external topic R just so they can be consumed back into your NiFi?

Yes, I want them to be consumed back into NiFi. I publish all failed requests to the retry topic, and they are handled in the retry flow. This way I can keep my main flow free of failed requests, while new requests without errors are pushed to the endpoint successfully.

> It would be more efficient to just keep them in NiFi and create a retry loop on each InvokeHTTP. NiFi even offers retry handling directly on the relationships within the processor configuration.

If I add a retry loop to InvokeHTTP and the endpoint is down for a longer time, too many requests will get queued in NiFi.

> If you must write the message out to just one topic R, you'll need to append something to the message that indicates which InvokeHTTP (A or B) failure or retry resulted in it being written to topic R. Then have a single retry dataflow that consumes from topic R and extracts that A or B identifier from the message so it can be routed to the correct InvokeHTTP. It just seems like a lot of unnecessary overhead.

Please help me with the retry logic. Data is going into the same retry topic; how can I differentiate the data, i.e., whether it failed from dataflow 1 or from dataflow 2?
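The tagging approach quoted above could be carried via Kafka headers instead of modifying the message body. A sketch, assuming the Kafka 2.x processors and an attribute name ("retry.source") chosen purely for illustration:

```
# In each failure path, tag the FlowFile before publishing (UpdateAttribute):
retry.source = invokehttp-A        # use "invokehttp-B" in the other flow

# PublishKafka_2_6 (failure path -> topic R):
Attributes to Send as Headers (Regex) : retry\.source

# ConsumeKafka_2_6 (retry flow, consuming topic R):
Headers to Add as Attributes (Regex) : retry\.source

# RouteOnAttribute in the retry flow:
to-invokehttp-A = ${retry.source:equals('invokehttp-A')}
to-invokehttp-B = ${retry.source:equals('invokehttp-B')}
```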
09-20-2024
07:17 AM
1 Kudo
Hi @MattWho Thanks for the response, much appreciated. What I was looking at doing was simply moving the Registry from one of our servers, which we had set up previously, onto another server we were using for production; so not keeping two registries, but instead using the new one and getting rid of the old one. What I didn't want to do was lose everything from the old one... However, this turned out to be WAY easier than I thought, lol. Basically, we were deploying through Ansible, and I had missed some configuration values and files when deploying, which meant it was actually set to use files instead of a git repo for the storage system. Once I found this issue and updated it to use the git repo, presto! Everything worked as expected, with everything from the git repository available in the new Registry server and all set up. Thanks for the information though, that's also very helpful!
09-18-2024
05:44 AM
@abhinav_joshi You should have been able to right-click on the "ghost" processor and select the "change version" option. This would present you with all the versions available in your NiFi installation; simply selecting the one you want to use would resolve your issue. While this works great when you only have a few ghost processors created from your dataflow, it can be annoying to follow these steps for many components.

The question here is why your deployment of NiFi has multiple versions of the same NiFi nar installed. NiFi does not ship this way, which means additional nar(s) of different versions were added to your NiFi lib directory or to the NiFi extensions directory. You should remove these duplicate nars to avoid running into this issue again. When only one version exists, a dataflow imported/loaded with an older version will automatically switch to the version present in the NiFi into which the dataflow is loaded (this may mean an older or newer version of the nar classes).

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
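One quick way to spot duplicate nar versions from the command line (a sketch; adjust the paths to match your installation):

```
# List nar base names that appear more than once across the lib and
# extensions directories (version suffixes stripped before comparing)
ls /opt/nifi/lib/*.nar /opt/nifi/extensions/*.nar 2>/dev/null \
  | xargs -n1 basename \
  | sed -E 's/-[0-9][^-]*\.nar$//' \
  | sort | uniq -d
```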
09-17-2024
08:37 AM
1 Kudo
@rizalt There is very little detail in your post. NiFi will run as whatever user is used to start it unless the "run.as" property is set in the NiFi bootstrap.conf file. If the user trying to execute the "./nifi.sh start" command is not the root user and you set the "run.as" property to "root", that user would need sudo permissions in Linux to start NiFi as the root user. The "run.as" property is ignored on Windows, where the service is always owned by the user that starts it.

NOTE: Starting the service as a different user than it was previously started with will not trigger a change in file ownership in NiFi's directories. You would need to update file ownership manually before starting as a different user (this includes all of NiFi's repositories). While the "root" user has access to all files regardless of owner, issues will arise if a non-root user launches the app while files are owned by another user, including root.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
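A minimal sketch of the Linux setup, assuming an install at /opt/nifi and a dedicated "nifi" service account (both are placeholders):

```
# conf/bootstrap.conf - have NiFi run as the "nifi" user:
#   run.as=nifi

# If ownership was previously mixed, fix it manually before starting;
# include the repository directories if they live outside the install path
sudo chown -R nifi:nifi /opt/nifi
sudo -u nifi /opt/nifi/bin/nifi.sh start
```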
09-17-2024
08:10 AM
@Chetan_mn I loaded up the latest NiFi 2.0.0-M4 (milestone 4 release) and loaded the flow definition used in my NiFi 1.23 version. All seems to work fine: sending headers with mixed case, I see the correct attributes created from those mixed-case headers on the FlowFiles generated by the HandleHTTPRequest processor.

InvokeHTTP: You'll see two custom headers (displayName and outerID) added as dynamic properties.

HandleHTTPRequest processor: When I "List Queue" on the connection containing the "success" relationship from the HandleHTTPRequest processor and "view details" on the queued FlowFile, the FlowFile attributes look correct.

Are you saying you see something different? Try using NiFi 2.0.0-M4 (latest) to see if the experience is the same. At what point in your dataflow are you validating the FlowFile attributes? Is your custom script maybe handling them incorrectly? I am not seeing an issue in the HandleHTTPRequest processor with regard to HTTP header handling.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
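If you want to reproduce the check outside of InvokeHTTP, a quick curl sketch (the host, port, and path are placeholders for wherever your HandleHTTPRequest is listening):

```
# Send two mixed-case custom headers to the HandleHTTPRequest listener
curl -X POST "http://nifi-host:7001/test" \
  -H "displayName: example-name" \
  -H "outerID: 12345" \
  -d '{"hello":"world"}'

# The queued FlowFile should carry attributes such as:
#   http.headers.displayName = example-name
#   http.headers.outerID     = 12345
```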