Member since: 07-30-2019
Posts: 3411
Kudos Received: 1623
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 390 | 12-17-2025 05:55 AM |
| | 451 | 12-15-2025 01:29 PM |
| | 466 | 12-15-2025 06:50 AM |
| | 384 | 12-05-2025 08:25 AM |
| | 628 | 12-03-2025 10:21 AM |
09-24-2024
01:54 PM
1 Kudo
@Ashi Potential option: What Record Reader and Record Writer are you using in your UpdateRecord processor, and what schema are you using for your records? In order to add a new field, that new field needs to be defined in the record's schema. In your case the schema must contain the field "devicename".

Prior to UpdateRecord, you could perhaps use an ExtractText processor to extract the "rc01;rik2jc" value from the meHostName field to a FlowFile attribute. Then you will be able to use UpdateRecord to apply a value to that new record field via the Record Writer:

Property: /devicename
Value: ${flowfile.attribute:substringAfter(';')}

A schema sketch is included at the end of this reply.

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
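As a minimal sketch, an Avro schema for the Record Reader/Writer that declares the new field could look like the following (the record name and any fields other than "meHostName" and "devicename" are illustrative assumptions):

```
{
  "type": "record",
  "name": "device",
  "fields": [
    { "name": "meHostName", "type": "string" },
    { "name": "devicename", "type": ["null", "string"], "default": null }
  ]
}
```

Making "devicename" nullable with a null default lets the same schema read the incoming records (which do not yet have the field) and write the enriched ones.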
09-18-2024
03:27 PM
1 Kudo
@Crags You cannot have both of your NiFi-Registries linked to the same Git repository. NiFi-Registry only pushes to the Git repository; the only time NiFi-Registry would ever read from the Git repository is on startup. So if you used two NiFi-Registries and were committing changes from both, you could cause issues with what is getting committed to your Git repo.

What is more common is to have a single NiFi-Registry which is utilized by multiple NiFi deployments. QA NiFi builds some flow, and when that flow is ready for production, it is committed to the NiFi-Registry. That flow can then be imported from that single NiFi-Registry to the canvas of your PROD NiFi. Now both NiFi instances are tracking the same flow in the same registry. You then start making local changes to that same version controlled Process Group (PG) in your QA NiFi, and the PG will indicate you have local changes. You then have a couple of choices on how you want to use your shared NiFi-Registry:

1. Wait until you have completed making all your changes and testing in QA before committing the next version to the shared registry. At that time your Prod NiFi PG will indicate a newer version is available in the shared NiFi-Registry, and you can then update your prod to that new version.
2. Incrementally commit updated versions of the PG to the shared registry. Your prod will show a new version available, so you will want to create a process for deciding which versions are prod ready, to control when a new version is actually deployed in your prod.

About the UUID linkage... Your NiFi can have one or more defined registry clients, and each of those defined registry clients gets an assigned UUID on the NiFi instance (it will not be the same UUID on every NiFi that sets up the same registry client). NiFi stores everything on the canvas locally in the flow.json.gz file so it can be reloaded into NiFi heap on startup. When you start version control on a PG, the flow (which gets a UUID) is added to a NiFi-Registry bucket (which has a UUID). Locally on the NiFi, within the flow.json.gz, there is now a reference to a specific NiFi-Registry client (by its UUID), a specific bucket (by its UUID), and a specific flow (by its UUID).

Now considering the scenario of a shared NiFi-Registry, the registry client on the other NiFi will have a different UUID even though it connects to the same shared NiFi-Registry. So using the registry client, you import that flow from NiFi-Registry to the NiFi canvas. Every component created from the imported flow will get assigned UUIDs (which will not match the UUIDs assigned on the other NiFi). Those differences in UUIDs are not tracked as changes. This is why, if you stop version control, you can't start version control again and connect it back to an existing flow stored in NiFi-Registry. You also can't delete the registry client and re-create it, as it too would get a different UUID (NiFi blocks removing a registry client if any PGs are currently using it for version control for this reason).

---------

Another option is to have a separate NiFi-Registry for each environment. When you are ready to move a flow from NiFi-Registry 1 (QA) to NiFi-Registry 2 (Prod), go into your QA NiFi-Registry, locate the flow, and from the "actions" menu select "export version", choosing the version you want to export. You can then go to your Prod NiFi-Registry and "import new flow". Once imported, you can go to your Prod NiFi and load that flow onto the canvas. Later, when you are ready in QA with a new version to push to prod, you can again export the prod ready version. On the Prod NiFi-Registry, you can select the existing flow and from the "actions" menu select "import new version". This will allow you to add this flow as the next version in Prod. After doing so, the version controlled PG(s) on your Prod NiFi tracking against that flow will report that a new version is available. This second option allows you to have better control over what changes make it to your Prod deployment. You could also script rest-api calls to automate these steps if you wanted; a sketch is included at the end of this reply.

------

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
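For the scripted option, a minimal sketch using curl against the NiFi-Registry REST API might look like the following. The hostnames, ports, UUIDs, and version number are placeholders, and the exported snapshot's metadata (bucket/flow IDs and version number) typically needs to be adjusted to match the target registry before importing:

```
# Export version 3 of a flow from the QA registry as a flow snapshot
curl -s "https://qa-registry:18443/nifi-registry-api/buckets/<bucketId>/flows/<flowId>/versions/3" \
  -o flow-v3.json

# Import it as the next version of the matching flow in the Prod registry
curl -s -X POST "https://prod-registry:18443/nifi-registry-api/buckets/<bucketId>/flows/<flowId>/versions" \
  -H "Content-Type: application/json" \
  -d @flow-v3.json
```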
09-18-2024
05:44 AM
@abhinav_joshi You should have been able to right click on the "Ghost" processor and select the "change version" option. This would have presented you with all the available versions in your NiFi installation; simply selecting the one you want to use would resolve your issue. While this works great when you only have a few ghost processors created from your dataflow, it can be annoying to follow these steps for many components.

The question here is why your deployment of NiFi has multiple versions of the same NiFi nar installed. NiFi would not ship this way, so that means additional nar(s) of different versions were added to your NiFi lib directory or to the NiFi extensions directory. You should remove these duplicate nars to avoid running into this issue again; a sketch for finding them is included at the end of this reply. When only one version exists, a dataflow imported/loaded with older versions will automatically switch to the version used in the NiFi into which the dataflow was loaded (this may mean an older or newer version of the nar classes).

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
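As a quick way to spot the duplicates, a sketch like the following (run from the NiFi install directory; the lib and extensions paths are assumptions) lists nar artifacts that are present in more than one version:

```
# Strip the directory and the trailing -<version>.nar, then report
# artifact names that appear more than once across lib/ and extensions/
ls lib/*.nar extensions/*.nar 2>/dev/null \
  | sed -E 's#.*/##; s/-[0-9].*\.nar$//' \
  | sort | uniq -d
```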
09-17-2024
08:37 AM
1 Kudo
@rizalt There is very little detail in your post. NiFi will run as whatever user is used to start it unless the "run.as" property is set in the NiFi bootstrap.conf file. If the user trying to execute the "./nifi.sh start" command is not the root user and you set the "run.as" property to "root", that user would need sudo permissions in Linux to start NiFi as the root user. The "run.as" property is ignored on Windows, where the service will always be owned by the user that starts it.

NOTE: Starting the service as a different user than it was previously started as will not trigger a change in file ownership in NiFi directories. You would need to update file ownership manually before starting as a different user (this includes all of NiFi's repositories). While the "root" user has access to all files regardless of owner, issues will exist if a non-root user launches the app and files are owned by another user, including root. A sketch is included at the end of this reply.

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
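As an illustrative sketch (the install path and the service user "nifi" are assumptions), the relevant bootstrap.conf setting and a manual ownership fix would look like:

```
# conf/bootstrap.conf - run NiFi as a dedicated service user
run.as=nifi

# If NiFi was previously started as a different user, fix ownership of
# the install directory (including all repositories) before starting:
chown -R nifi:nifi /opt/nifi
```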
09-17-2024
08:10 AM
@Chetan_mn I loaded up the latest NiFi 2.0.0-M4 (milestone 4 release) and loaded the flow definition used in my NiFi 1.23 version. All seems to work fine: sending headers with mixed case, I see the correct attributes created from those mixed case headers on the FlowFile generated by the HandleHTTPRequest processor.

InvokeHTTP: two custom headers (displayName and outerID) were added as dynamic properties.

HandleHTTPRequest processor: when I "List Queue" on the connection containing the "success" relationship from the HandleHTTPRequest processor and "view details" on the queued FlowFile, the FlowFile attributes look correct (see the sketch at the end of this reply).

Are you saying you see something different? Try using NiFi 2.0.0-M4 (latest) to see if the experience is the same. At what point in your dataflow are you checking/validating the FlowFile attributes? Is your custom script maybe handling them wrong? I am not seeing an issue in the HandleHTTPRequest processor with regards to HTTP header handling.

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
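For reference, this is roughly what I see on the queued FlowFile, assuming the usual "http.headers." attribute prefix that HandleHTTPRequest applies to request headers:

```
http.headers.displayName = Display1
http.headers.outerID     = 123456aBcD
http.method              = PATCH
```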
09-12-2024
02:03 PM
1 Kudo
@Chetan_mn While I do not currently have an install of the Technical Preview NiFi 2.0 milestone 2 release, I used a NiFi 1.18 to build a simple dataflow using HandleHTTPRequest. I then set up an InvokeHTTP processor to send a message to that API endpoint using the PATCH HTTP method. I also included a couple of custom headers:

displayName=Display1
outerID=123456aBcD

When I inspected the FlowFile received from HandleHTTPRequest, the FlowFile attributes created from the headers look correct. I suggest you try using an InvokeHTTP processor to test your HandleHTTPRequest processor in Apache NiFi 2.0.0-M2 to make sure your issue is not the result of some external manipulation of the headers before they are received by the HandleHTTPRequest processor (a sketch of the test setup is at the end of this reply). The headers are simply created as FlowFile attribute names. I am curious how the all-lowercase versions of these property names are impacting your dataflow. Are the values for your headers being modified?

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
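For anyone wanting to reproduce the test, this is the shape of the InvokeHTTP configuration I used (the listener URL is a placeholder; dynamic properties added to InvokeHTTP are sent as request headers):

```
InvokeHTTP
  HTTP Method : PATCH
  Remote URL  : http://localhost:8989/contentListener   # placeholder
  # dynamic properties -> sent as request headers
  displayName : Display1
  outerID     : 123456aBcD
```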
09-12-2024
06:51 AM
1 Kudo
@Techie123 When you say "run it manually", does that mean you simply start the processor and allow it to run continuously, or are you right clicking on the processor and selecting "run once"? How do you have the "scheduling" configured for the processor? I assume you are trying to use cron? Thank you, Matt
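If it is cron, note that NiFi's "CRON driven" scheduling strategy uses Quartz-style expressions that include a seconds field, for example:

```
# seconds minutes hours day-of-month month day-of-week
0 0 2 * * ?    # fire once per day at 02:00
```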
09-11-2024
02:03 PM
1 Kudo
@Leo3103 This exception might be caused by a missing nar file in your MiNiFi. MiNiFi does not include all the NiFi nars. https://github.com/apache/nifi/blob/main/minifi/minifi-docs/src/main/markdown/minifi-java-agent-quick-start.md A quick look through your flow definition showed you were using the ExecuteSQL processor. While ExecuteSQL is part of one of the included nars, it requires another nar that is not included (this is noted in the above-linked quick start guide). You'll need to add the "nifi-dbcp-service-nar" from your NiFi distribution to your MiNiFi's lib directory; a sketch is at the end of this reply.

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
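As a sketch (the paths and nar version are assumptions; match the version to your NiFi distribution):

```
# Copy the missing controller service nar from NiFi into MiNiFi's lib
cp /opt/nifi/lib/nifi-dbcp-service-nar-1.23.2.nar /opt/minifi/lib/
# Restart MiNiFi so the nar is loaded
```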
09-09-2024
02:36 PM
I think your issue may be with using the SingleUserAuthorizer and single-user login identity provider. These out-of-the-box providers were built so that NiFi could be HTTPS enabled securely out of the box. They are not designed to support clustering, nor are they suitable for production NiFi. You'll want to configure your NiFi cluster to use a production ready authorizer (managed authorizer) and a user authentication method other than single-user so you can have granular access controls per user/team. Most common is the ldap-provider. The documentation provides examples for authorizers.xml setup:

https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#file-based-ldap-authentication
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#file-based-kerberos-authentication
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#ldap-based-usersgroups-referencing-user-dn
etc.

Your cluster is most likely not forming completely due to node-to-node authentication and authorization issues resulting from using the single user authorizer. In a NiFi cluster, the node identities (derived from the clientAuth certificates in the mutual TLS exchange) need to be authorized against some NiFi policies, like "Proxy user requests". A sketch of the relevant nifi.properties change is at the end of this reply.

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
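As a minimal sketch of the switch, the two nifi.properties entries below point NiFi at an LDAP login provider and the managed authorizer. The "ldap-provider" and "managed-authorizer" identifiers must match the providers you define in login-identity-providers.xml and authorizers.xml per the linked docs:

```
# nifi.properties
nifi.security.user.login.identity.provider=ldap-provider
nifi.security.user.authorizer=managed-authorizer
```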
09-09-2024
01:52 PM
@hegdemahendra The FlowFile connection back pressure thresholds are soft limits. Once one of the configured back pressure thresholds is reached or exceeded, NiFi will not allow the processor feeding that connection to be scheduled to execute again. So in your case, no back pressure is being applied yet, the ConsumeAzureEventHub processor is being allowed to be scheduled to execute, and during a single execution it is consuming more events than the threshold settings would suggest. For example, with an object threshold of 10,000 and a batch size of 5,000, a queue holding 9,999 FlowFiles still permits the processor to run, and that single execution can push the queue to 14,999. What is the batch size set to in your ConsumeAzureEventHub processor?

Please help our community thrive. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt