Member since: 07-30-2019
Posts: 3421
Kudos Received: 1628
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 149 | 01-13-2026 11:14 AM |
|  | 266 | 01-09-2026 06:58 AM |
|  | 552 | 12-17-2025 05:55 AM |
|  | 613 | 12-15-2025 01:29 PM |
|  | 570 | 12-15-2025 06:50 AM |
09-05-2024
08:45 AM
1 Kudo
@yagoaparecidoti I think there is still confusion here about what you are really doing. Templates are in XML format, not JSON. There is no option to download a template in JSON format. Templates and flow definitions are two different things (templates are deprecated and fully removed as of Apache NiFi 2).

In the Apache NiFi 0.x and 1.x versions you can right-click on the canvas or on a selection of components and select "create template" from the context menu displayed, or click the create template option from the operate panel. This creates an XML template, and you then need to navigate to the template UI under the NiFi global menu in order to download that XML template.

Flow definitions are in JSON format and can be created by right-clicking on a process group or the canvas of the process group and selecting "Download Flow Definition". This produces a flow definition JSON that you store outside of your NiFi. A flow definition JSON is uploaded to NiFi by dragging the Process Group icon to the canvas and clicking on the browse icon to select your JSON for upload through the UI. Flow definitions are NOT stored within your NiFi. (A short rest-api sketch for downloading a flow definition follows at the end of this reply.)

I would advise strongly against using NiFi templates as they will become unusable in the newer versions of NiFi. Templates were deprecated for two reasons: 1. They are created and held in NiFi, which means they consume NiFi heap memory space until they are deleted via the template UI. 2. They use XML format. NiFi has moved away from XML-based flows; the flow.xml.gz is also deprecated in favor of the flow.json.gz format.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
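For reference, here is a minimal sketch of pulling a flow definition through the rest-api. It assumes an unsecured NiFi 1.x instance at localhost:8080 and a placeholder process group UUID; confirm the exact endpoint for your NiFi version with your browser's developer tools, since this mirrors the call the UI's "Download Flow Definition" option makes.

```python
# Minimal sketch: download a flow definition (JSON) for a process group
# via the NiFi rest-api. Assumes an unsecured NiFi 1.x at localhost:8080;
# the process group UUID is a placeholder - use the UUID shown in your UI.
import requests

NIFI_API = "http://localhost:8080/nifi-api"
PG_UUID = "replace-with-your-process-group-uuid"

# Same style of call the UI makes for "Download Flow Definition"; verify
# the endpoint for your version with the browser developer tools.
resp = requests.get(f"{NIFI_API}/process-groups/{PG_UUID}/download", timeout=30)
resp.raise_for_status()

with open("my-flow-definition.json", "wb") as out:
    out.write(resp.content)  # store the flow definition outside of NiFi
```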
09-05-2024
07:55 AM
@Salmidin Please create a new community question with the details of your issue. It seems unrelated to this thread's issue. "Policies" missing from the global menu indicates your NiFi is either using the Single User Authorizer (the out-of-the-box default) or is not set up to be secure. NiFi needs to be configured with a production-ready authorizer for the "Policies" and "Users" options to be visible in the NiFi global menu. Feel free to ping me @MattWho in your new community question. Thanks, Matt
09-05-2024
07:45 AM
1 Kudo
@NagendraKumar The image you shared indicates that your PublishKafka processor is producing a bulletin. What is the nature of the exception being reported in that bulletin? I also see what appears to be only one connection exiting your PublishKafka processors. The PublishKafka processor has multiple relationships. Are you auto-terminating the "failure" relationship? If so, I never recommend doing that.

-----------

Now when it comes to monitoring queue sizes/thresholds, you could use the ControllerStatusReportingTask NiFi reporting task to output these stats for all connections to a dedicated log (see Additional Details... for how to set up dedicated logs via NiFi's logback.xml). You can then create a NiFi dataflow that tails the dedicated connection log, parses the ingested log entries for connections that exceed your 80% threshold, and routes those to a PutEmail processor for your notification needs.

(preferred) Another option here is to use the SiteToSiteStatusReportingTask to report specifically on NiFi connections and feed that data into a dataflow that parses for thresholds in excess of 80% and routes those to a PutEmail processor. This method has less overhead as it does not write to a NiFi log or require tailing logs, can be set up to report on connections only, and reports in a more structured format (see Additional Details...). A small sketch of the threshold check itself follows at the end of this reply.

----------

From your image I can also see your PublishKafka processor reporting 3 actively running threads. You mentioned the processor becomes hung? Have you analyzed a series of thread dumps to identify where it is getting hung? NiFi also offers a way to monitor for long running tasks: Runtime Monitoring Properties. You could use this in conjunction with the SiteToSiteBulletinReportingTask to construct a dataflow that sends an email alert when tasks are detected on processors that have been running in excess of the configured threshold. This runtime monitoring does have an impact on your overall NiFi performance due to the overhead needed to run it, so if you find it impacts your throughput performance negatively, you'll need to stop using it.

__________

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
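To illustrate the 80% check itself, here is a minimal sketch of the kind of logic that dataflow (or an external consumer of the reported records) would apply. The field names used below are assumptions for illustration only; check the "Additional Details..." page of the reporting task you choose for the actual record schema in your NiFi version.

```python
# Minimal sketch of an 80% queue-threshold check against connection
# status records. Field names are illustrative assumptions - verify them
# against the reporting task's documented record schema.
import json

THRESHOLD_PCT = 80.0

def over_threshold(record: dict) -> bool:
    """Return True when a connection's queued object count is at or above
    80% of its configured back pressure object threshold."""
    queued = record.get("queuedCount", 0)
    limit = record.get("backPressureObjectThreshold", 0)
    return limit > 0 and (queued / limit) * 100.0 >= THRESHOLD_PCT

# Example: parse a batch of reported connection records and collect the
# ones that should be routed on to a notification step (e.g. PutEmail).
raw = '[{"connectionName": "to-kafka", "queuedCount": 9000, "backPressureObjectThreshold": 10000}]'
alerts = [r for r in json.loads(raw) if over_threshold(r)]
for r in alerts:
    print(f'Connection "{r["connectionName"]}" is at or above {THRESHOLD_PCT}% of its object threshold')
```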
09-03-2024
01:34 PM
2 Kudos
@wasabipeas @Adhitya The thrown exception reports which node has the mismatched revision. Is that node the currently elected cluster coordinator? Have you tried the following on just that reported node: 1. Stop NiFi. 2. Remove or rename both the flow.xml.gz and flow.json.gz files (only deleting one will not work). 3. Restart that NiFi node. It will inherit the flow from the cluster coordinator when it joins. If this node was the elected cluster coordinator, another node will assume the cluster coordinator role when you shut it down.

----

Another option is to disconnect the node reporting the mismatch in revision. Then, from the same Cluster UI used to disconnect that node, select to drop/delete it from the cluster. Your cluster will now report one less node. See if you can then move the process group, or if it reports another node with a mismatched revision. NOTE: Deleting/dropping a node from the cluster using the Cluster UI does nothing to that node. If you restart the node that was deleted/dropped, it will rejoin the cluster again.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
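If it helps, here is a small sketch of step 2, run on the affected node while NiFi is stopped. The conf/ path is an assumption based on the default layout; adjust it if your nifi.properties points the flow configuration files elsewhere.

```python
# Minimal sketch of step 2: rename BOTH flow files on the affected node
# while NiFi is stopped. The conf directory below is an assumption based
# on the default install layout - adjust it to match your nifi.properties.
from pathlib import Path

conf_dir = Path("/opt/nifi/conf")  # assumption: change to your install path

for name in ("flow.xml.gz", "flow.json.gz"):
    flow_file = conf_dir / name
    if flow_file.exists():
        backup = flow_file.with_name(flow_file.name + ".bak")
        flow_file.rename(backup)  # keep a copy rather than deleting outright
        print(f"Renamed {flow_file.name} -> {backup.name}")
```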
08-08-2024
01:11 PM
1 Kudo
@yagoaparecidoti NiFi templates have been deprecated in Apache NiFi; NiFi flow definitions are the replacement. The ability to create and import templates no longer exists as of the Apache NiFi 2.x releases. NiFi templates as well as NiFi flow definitions exist in the Apache NiFi 1.x releases. The first question is: what do you mean by "full template"?

One of the best ways to discover the rest-api calls needed to accomplish any task is through the developer tools available in your browser. Open the developer tools and execute the steps via the NiFi UI to accomplish your use case. With each step you can capture the rest-api requests that are being made. Developer tools will even allow you to right-click on a request and select "copy as curl". This shows you the rest-api endpoint along with the headers and raw data that go with the request. The browser will add numerous additional headers that are not needed.

@vaishaakb has provided some other community articles that are good resources.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
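As a simple illustration of replaying a captured request outside the browser with only the header that matters, here is a hedged sketch. It assumes a NiFi 1.x instance secured with username/password login; the host, credentials, CA path, and the endpoint you replay are all placeholders for whatever you captured in developer tools.

```python
# Minimal sketch: replay a rest-api request captured from the browser's
# developer tools, keeping only the Authorization header. Assumes a
# NiFi 1.x secured with username/password login; URL, credentials and
# certificate path are placeholders.
import requests

NIFI_API = "https://nifi.example.com:8443/nifi-api"
CA_BUNDLE = "/path/to/nifi-ca.pem"  # trust the NiFi server certificate

# Obtain an access token (the same call the login page makes).
token_resp = requests.post(
    f"{NIFI_API}/access/token",
    data={"username": "your-username", "password": "your-password"},
    verify=CA_BUNDLE,
)
token_resp.raise_for_status()
token = token_resp.text  # the token is returned as plain text

# Replay the captured request; most browser-added headers are unnecessary.
resp = requests.get(
    f"{NIFI_API}/flow/about",  # substitute the endpoint you captured
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_BUNDLE,
)
print(resp.json())
```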
08-06-2024
05:58 AM
@DeepakDonde Please share more details on your "headless" NiFi setup, including the NiFi version and configuration. Or are you using MiNiFi?
08-06-2024
05:33 AM
@CDC- I encourage you to start a new community question rather than adding to an existing question with an accepted answer. Your query is really unrelated to this question.

Something appears to be happening to your content before it even reaches the PutDatabaseRecord processor. I say this because the exception shared indicates the processor is looking for the content in an "archived" content claim. Content claims are only moved to archive once the claimant count is zero (meaning no actively queued FlowFiles are still referencing content in that claim). Any content claims moved to archive are subject to removal/deletion by the background archive clean-up thread, so I am not surprised the content is missing. The real question here is what the lineage of this FlowFile is and at what point upstream of your PutDatabaseRecord processor the problem developed. Please start a new community question and provide as much detail as possible.

Thanks, Matt
08-06-2024
05:18 AM
1 Kudo
@akash007 The ConsumeJMS processor configuration has an option to select a StandardRestrictedSSLContextService, which is configured with the keystore and/or truststore needed to facilitate your TLS connection with a secured JMS endpoint. For one-way TLS, you'll only need to configure the truststore properties. If mutual TLS is needed, you'll need to configure both the keystore and truststore properties.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
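To illustrate the difference conceptually (outside of NiFi): the truststore plays the role of the CA bundle you verify the broker against, and the keystore plays the role of the client certificate/key you present for mutual TLS. A hedged Python sketch, where the broker host, port, and file paths are placeholders:

```python
# Conceptual sketch of one-way vs mutual TLS, to show the roles the
# truststore and keystore play in the SSL context service configuration.
# Broker host/port and file paths are placeholders.
import socket
import ssl

BROKER_HOST, BROKER_PORT = "jms.example.com", 61617

# One-way TLS: only verify the broker's certificate (truststore role).
ctx = ssl.create_default_context(cafile="/path/to/broker-ca.pem")

# Mutual TLS: additionally present a client certificate (keystore role).
# ctx.load_cert_chain(certfile="/path/to/client-cert.pem",
#                     keyfile="/path/to/client-key.pem")

with socket.create_connection((BROKER_HOST, BROKER_PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=BROKER_HOST) as tls:
        print("Negotiated", tls.version(), "with", BROKER_HOST)
```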
08-05-2024
09:00 AM
@DeepakDonde I may not be completely clear on what you have tried above; I am not sure what you mean by "completely new process group". The rest-api endpoints used in both commands above are the same except each has a unique process group UUID:

1e373804-0191-1000-e950-856de82e267d <-- worked
01911000-3804-1e37-10a0-827d946513c6 <-- did not work

The second throws an exception implying that a process group with that UUID does not exist. That UUID needs to be the UUID of the process group into which you are uploading your flow definition. A flow definition consists of a Process Group (PG) that normally contains components. I am guessing that the UUID you shared that did not work is some random UUID you created?

When your flow definition is uploaded to the NiFi UI, its PG and components are assigned UUIDs. You can upload the same flow definition multiple times, and each time all components get new UUIDs. NiFi flow definitions can only be created by right-clicking on a Process Group and selecting "Download Flow Definition" from the displayed context menu, or via the equivalent rest-api endpoint. That flow definition is stored as a JSON file which can later be uploaded to the canvas of the same NiFi or to another NiFi. A flow definition is uploaded only by specifying the process group you want to upload the flow definition to and the x and y coordinates at which the flow definition's Process Group will be placed. It is not possible to upload a flow definition without this valid information.

Even a freshly installed NiFi will generate a root process group on first startup. When you access the UI of a new NiFi install, the blank canvas you are presented with is the root process group, and it has an assigned UUID (shown in the "operate" panel on the left side of the canvas). From that root process group you can add many levels of child process groups, and you can upload a flow definition to the root process group or any child process group level. (See the short sketch at the end of this reply for looking up the root process group UUID via the rest-api.)

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
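To take the guesswork out of the UUID, you can ask NiFi for the root process group's id through the rest-api and reuse that in your upload call. A minimal sketch, assuming an unsecured NiFi at localhost:8080 (on a secured instance add your usual Authorization header); confirm the response structure for your version with the browser developer tools.

```python
# Minimal sketch: look up the root process group UUID through the
# rest-api, then reuse it as the target of a flow definition upload.
# Assumes an unsecured NiFi at localhost:8080; confirm the response
# field names for your NiFi version with the browser developer tools.
import requests

NIFI_API = "http://localhost:8080/nifi-api"

# "root" is accepted as an alias for the root process group's UUID.
resp = requests.get(f"{NIFI_API}/flow/process-groups/root", timeout=30)
resp.raise_for_status()
root_pg_id = resp.json()["processGroupFlow"]["id"]

# Use this UUID (or the UUID of any existing child process group, as
# shown in the "operate" panel) as the target of your upload request,
# rather than a made-up UUID.
print("Upload your flow definition to process group:", root_pg_id)
```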
08-02-2024
08:33 AM
1 Kudo
@Adyant001 The JsonQueryElasticSearch processor does not store state. If the processor has no inbound connection, it will be scheduled to execute using its configured properties and scheduling, so upon every execution it is going to make the same query to Elasticsearch. This processor can also be triggered by an inbound FlowFile: you would have some upstream dataflow build a query into the FlowFile's content, and that FlowFile is fed to the JsonQueryElasticSearch processor to fetch that specific result. The processor would then not execute again until it received another inbound FlowFile to trigger the execution.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
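As a simple illustration, the content of that inbound FlowFile would just be the Elasticsearch query JSON you want run. A hedged sketch of building such a query; the index fields and time window below are made-up examples, not something from your flow:

```python
# Illustration only: the inbound FlowFile's content would simply be the
# Elasticsearch query JSON for JsonQueryElasticSearch to run. Field
# names and the time window are made-up examples.
import json

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"status": "error"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "size": 100,
}

# In NiFi this JSON would be written into the FlowFile content by an
# upstream processor and the FlowFile routed to JsonQueryElasticSearch
# to trigger the fetch.
print(json.dumps(query, indent=2))
```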