Member since: 07-30-2019
Posts: 3406
Kudos Received: 1622
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 159 | 12-17-2025 05:55 AM |
| | 220 | 12-15-2025 01:29 PM |
| | 158 | 12-15-2025 06:50 AM |
| | 265 | 12-05-2025 08:25 AM |
| | 444 | 12-03-2025 10:21 AM |
04-14-2020
09:30 AM
@krishnaraj_v13 The error output is telling you that your NiFi node(s) have not been granted the proxy policy in your NiFi-Registry. Your NiFi-Registry policies are managed locally within NiFi-Registry. Your NiFi is set up to use Ranger to handle authorizations, and I see you mentioned you granted your NiFi nodes /proxy in Ranger, but those policies only apply to NiFi, not NiFi-Registry. Based on the authorizers.xml you shared from NiFi-Registry, I can see you defined your NiFi nodes as local users in the file-user-group-provider, but did not also configure those nodes in the access policy provider. The access policy provider is what creates the initial policies in the authorizations.xml file and assigns users to those policies. Note: Both NiFi and NiFi-Registry will only create the users.xml and authorizations.xml files if they do not already exist, so modifications to these providers in the authorizers.xml file will have no effect if those files already exist. To resolve the error you are seeing, log in to your NiFi-Registry with your initial admin user and grant your NiFi nodes the following policies: 1. "Can proxy user requests" (solves the current error) 2. "Can manage buckets" --> Read (allows NiFi nodes to read buckets to see if new flow versions have been committed) Hope this helps, Matt
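For reference, the relevant access policy provider block in NiFi-Registry's authorizers.xml looks something like the sketch below. The DNs are illustrative; yours must match your node identities exactly as presented by their certificates, and (per the note above) this only takes effect if authorizations.xml does not already exist:

```xml
<accessPolicyProvider>
    <identifier>access-policy-provider</identifier>
    <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
    <!-- One property per NiFi node, numbered sequentially (illustrative DNs): -->
    <property name="NiFi Identity 1">CN=nifi-node-1, OU=NIFI</property>
    <property name="NiFi Identity 2">CN=nifi-node-2, OU=NIFI</property>
</accessPolicyProvider>
```

Nodes listed via the "NiFi Identity" properties are granted the proxy and bucket read policies when the authorizations.xml file is first generated.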
04-14-2020
07:54 AM
@vivek12 The last known state of the components is written to the flow.xml.gz file. Make sure the NiFi service user owns this file and has proper permissions on it. Do you have multiple nodes in your NiFi cluster? If so, make sure nifi.flowcontroller.autoResumeState is set to true on every node; all it takes is one node having it set to false to cause issues. What version of Apache NiFi are you running? Thanks, Matt
04-14-2020
07:10 AM
@vivek12 If your NiFi dataflows are all coming up stopped after a NiFi restart, this indicates you have the following property in your nifi.properties file set to false: nifi.flowcontroller.autoResumeState. Make sure this property is set to true so that all components return to their last known state from before NiFi was last shut down. Hope this helps, Matt
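So on every node, nifi.properties should contain:

```
# nifi.properties -- restore components to their last known state on restart
nifi.flowcontroller.autoResumeState=true
```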
04-13-2020
09:18 AM
@Aminsh I am not sure where your response fits in to this thread. Are you asking a new question here? I recommend you start a new thread if that is the case. Thanks, Matt
03-30-2020
01:20 PM
1 Kudo
@venkii You need to log in to your secured NiFi-Registry and make sure all your NiFi nodes have been authorized for both of the following "Special Privileges": 1. "Read" for "Can Manage Buckets" 2. "Can proxy user requests" Click on the wrench icon in the upper right corner to manage your users in NiFi-Registry, then find your NiFi nodes in the list of users and click on the "manage user" pencil icon on the far right side. Hope this helps, Matt
03-27-2020
01:39 PM
1 Kudo
@Petr_Simik No matter which processor you are looking at, the stats presented tell you the same information:

In <-- How many FlowFiles were pulled from one or more inbound connections over the last rolling 5-minute window. Since you have configured this processor's "wait mode" to leave the FlowFile on the inbound connection, the processor looks at the same FlowFile over and over again until the configured expiration time has elapsed.

Read/Write <-- How much FlowFile content was read from or written to the NiFi content repository (helps identify processors that may be disk I/O heavy).

Out <-- How many FlowFiles were released to an outbound connection over the last rolling 5-minute window. Here the number reflects only those FlowFiles that expired and were sent to your outbound expired connection.

Tasks/Time <-- How many threads this processor completed over the last rolling 5 minutes and the total cumulative CPU time those threads consumed (helps identify processors that consume a lot of CPU time).

So the stats you are seeing are not surprising. While this processor works for your use case, it carries overhead: it must connect to a distributed map cache on every execution against an inbound FlowFile. If your intent is only to delay a FlowFile for 1 second before it proceeds down the flow path, a better solution may be an UpdateAttribute processor that creates an attribute with the current time, followed by a RouteOnAttribute processor that checks whether that recorded time plus 1000 ms is less than the current time, looping the FlowFile back until the check passes. Hope this helps, Matt
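That UpdateAttribute/RouteOnAttribute delay pattern might look like the sketch below; the attribute and property names are my own illustrative choices, not anything NiFi requires:

```
UpdateAttribute (adds a timestamp attribute):
  delay.start = ${now():toNumber()}

RouteOnAttribute (Routing Strategy: Route to Property name):
  elapsed = ${now():toNumber():gt(${delay.start:toNumber():plus(1000)})}

Route the "elapsed" relationship onward; loop "unmatched" back into
RouteOnAttribute. Give the processor a small Run Schedule (e.g. 1 sec)
so the self-loop does not busy-spin.
```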
03-27-2020
01:20 PM
@NY Anything you can do via the NiFi canvas, you can also accomplish via the rest-api. So you can create a queue listing via a rest-api call (note the POST):

curl -X POST http://<nifi-hostname>:<port>/nifi-api/flowfile-queues/<connection-uuid>/listing-requests

The above call will return a response that includes the URL you need to use to retrieve the results, for example:

http://<nifi-hostname>:<port>/nifi-api/flowfile-queues/<connection-uuid>/listing-requests/1d98d557-0171-1000-ffff-ffffd559ca47

Then query for the listing results:

curl http://<nifi-hostname>:<port>/nifi-api/flowfile-queues/<connection-uuid>/listing-requests/1d98d557-0171-1000-ffff-ffffd559ca47

This returns a JSON with all the FlowFiles from that specific connection queue from all nodes in your NiFi cluster. That JSON would need to be parsed for info such as FlowFiles whose "lineageDuration" epoch time is x amount of time older than now, the "clusterNodeAddress" (which node holds the file), and maybe "filename". Then delete the queue listing when done (this is important, or it stays around using heap space):

curl -X DELETE http://<nifi-hostname>:<port>/nifi-api/flowfile-queues/<connection-uuid>/listing-requests/1d98d557-0171-1000-ffff-ffffd559ca47

Hope this helps, Matt
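The three calls above can be wrapped in a small shell sketch. The host, port, and connection UUID are placeholders to replace with your own, and the helper function name is mine, not part of NiFi:

```shell
#!/bin/sh
# Placeholders -- replace with your NiFi host/port and the connection UUID
# (right-click the connection on the canvas to find its UUID).
NIFI="http://nifi-host:8080"
CONN="<connection-uuid>"

# Helper (my own naming): builds the listing-requests endpoint for a queue.
listing_url() {
  printf '%s/nifi-api/flowfile-queues/%s/listing-requests' "$NIFI" "$1"
}

# Step 1: create the listing request (POST). The JSON response includes
# the "uri" of the new request, containing its generated request UUID.
#   curl -X POST "$(listing_url "$CONN")"
# Step 2: fetch the listing results (GET) using that returned URI.
#   curl "$(listing_url "$CONN")/<request-uuid>"
# Step 3: delete the request when done, or it keeps consuming heap.
#   curl -X DELETE "$(listing_url "$CONN")/<request-uuid>"
```

The curl calls are left commented since they require a live cluster; on a secured NiFi you would also add your certificate or token flags to each.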
03-27-2020
12:56 PM
1 Kudo
@venk What you have run into at this point is a known issue. Your cluster was originally set up and running unsecured over HTTP port 8080. NiFi records the details of the nodes that are part of the cluster so that on later restarts it knows it should still wait for additional nodes to join before allowing users to make changes to the canvas. The downside is that when you switched to being secured over HTTPS on port 9091, the cluster now thinks you should have twice the number of nodes there really are. But this is an easy fix. Within your NiFi's conf directory you will find the file "state-management.xml". Inside that file is a section for NiFi's "local-provider" that contains the directory where your local state is kept. This path is normally the same on every node. Shut down your NiFi, go to this directory on every node in your cluster, and delete the contents of that state directory. Restart your NiFi and it will create new entries for only your secured nodes. https://issues.apache.org/jira/browse/NIFI-7255 Hope this helps, Matt
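As a sketch, you can pull the local state directory out of state-management.xml with something like the helper below. The sed pattern assumes the default layout of that file (one property per line), and the function name and paths are illustrative:

```shell
#!/bin/sh
# Extract the local-provider "Directory" property from state-management.xml.
# Assumes the default file layout, where the property sits on its own line.
local_state_dir() {
  sed -n 's/.*<property name="Directory">\([^<]*\)<\/property>.*/\1/p' "$1"
}

# Example (run on each node while NiFi is stopped):
#   dir=$(local_state_dir /path/to/nifi/conf/state-management.xml)
#   rm -rf "$dir"/*    # clear local state so fresh node entries are created
```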
03-25-2020
02:37 PM
@Faerballert Perhaps you could clone your FlowFile before the MergeContent processor: whichever relationship you are connecting to your current MergeContent, drag a second connection with that same relationship to a parallel notification flow. Down this parallel flow path, use a ReplaceText processor to replace the content with the value from the attribute you want to merge. Then use a MergeContent processor on this path to merge these files using a "," as your delimiter, and from this MergeContent do your notification. You may also want to open an Apache Jira with your use case and desired improvement for the existing MergeContent; the more details the better. Hope this helps, Matt
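As a sketch, the two key processors on that parallel path could be configured as below; the attribute name is illustrative:

```
ReplaceText:
  Replacement Strategy = Always Replace
  Replacement Value    = ${my.attribute}
  Evaluation Mode      = Entire text

MergeContent:
  Merge Strategy       = Bin-Packing Algorithm
  Delimiter Strategy   = Text
  Demarcator           = ,
```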
03-24-2020
12:24 PM
@Koffi When a NiFi node attempts to connect to an existing NiFi cluster, there are three files that are checked to make sure they match exactly between the connecting node and the existing copies in the cluster. Those files are: 1. flow.xml.gz 2. users.xml (will only exist if NiFi is secured over https) 3. authorizations.xml (not to be confused with the NiFi authorizers.xml file. Will only exist if NiFi is secured over https) The output in the nifi-app.log of the node should explain exactly what the mismatch was the first time it tried to connect to the cluster. Hope this helps, Matt
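A quick way to compare those three files between the connecting node and an already-connected node is to checksum them on each host and diff the output. This is just a sketch; the conf path and function name are my own:

```shell
#!/bin/sh
# Print a checksum for each file NiFi compares on cluster join.
# Run on the connecting node and on a connected node, then diff the output.
cluster_sync_checksums() {
  conf_dir="$1"
  for f in flow.xml.gz users.xml authorizations.xml; do
    if [ -f "$conf_dir/$f" ]; then
      md5sum "$conf_dir/$f"
    else
      echo "$f: missing (users/authorizations only exist on secured installs)"
    fi
  done
}

# Example: cluster_sync_checksums /path/to/nifi/conf
```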