Member since: 07-30-2019
Posts: 3421
Kudos Received: 1628
Solutions: 1010

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 114 | 01-13-2026 11:14 AM |
|  | 228 | 01-09-2026 06:58 AM |
|  | 524 | 12-17-2025 05:55 AM |
|  | 585 | 12-15-2025 01:29 PM |
|  | 565 | 12-15-2025 06:50 AM |
07-18-2024
05:40 AM
@Ali_12012 The InvokeHTTP processor utilizes the OkHttp client library, which does not support sending a body with a GET request: https://github.com/square/okhttp/issues/3154

I am not familiar with which other client libraries do support this, but I am guessing some exist since Postman handles it for you. The scripting processors allow you to write custom code that can use whatever client library you want. You could also build your own custom processor that utilizes some other client library you identify that supports a GET with a body. Sorry I can't be of more help here. As @SAMSAL shared, there are reasons why this is not supported and is not standard convention; Postman does not adhere to those standards and lets you do what you want.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
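For illustration only, here is a minimal standalone Python sketch (outside of NiFi itself; the URL, payload, and headers are placeholders) showing a client library that will attach a body to a GET, which is essentially what Postman is doing for you. Something like this could be invoked from ExecuteStreamCommand or adapted into a scripting processor.

```python
# Standalone sketch only: send a GET request that carries a body using the
# "requests" library (placeholder URL/payload; adapt before using).
import requests

url = "https://api.example.com/search"   # placeholder endpoint
payload = '{"query": "status:active"}'   # placeholder JSON body

resp = requests.request(
    "GET",
    url,
    data=payload,  # requests sends this body even though the method is GET
    headers={"Content-Type": "application/json"},
)
print(resp.status_code)
print(resp.text)
```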
07-17-2024
10:20 AM
1 Kudo
@PriyankaMondal In versions of Apache NiFi older than 1.16.0, NiFi does not allow any edits within the NiFi cluster while a node is disconnected. Changes are only allowed on the actual disconnected node.

Apache NiFi 1.16.0 introduced a new flow inheritance feature that allows a joining node whose existing flow.xml.gz/flow.json.gz does not match the cluster-elected flow to join the cluster by inheriting the cluster-elected flow. A joining node would only be blocked from this process if inheriting the cluster flow would result in data loss (meaning the joining node's flow contains a connection holding queued FlowFiles and the cluster-elected flow does not have that connection).

Later it was determined that this change can make it difficult to handle the outcome of the above issue: https://issues.apache.org/jira/browse/NIFI-11333

So it was decided that the best course of action was to not allow any component deletion while a node is disconnected. When a NiFi node is started, it attempts to join the cluster. If the node fails to join the cluster, it shuts back down to prevent users from mistakenly using it as a standalone node. That means users had no easy way to handle the queued data in the connection preventing the rejoin. Of course users could configure the node to come up standalone, but that does not make things any easier on the end user: the node loads up standalone, loads its FlowFiles and, depending on whether auto-resume was set or not, starts processing FlowFiles. This still leaves the user, with FlowFiles queued in many connections all throughout the UI, having a very difficult time determining which connection(s) were removed and would need to be processed out in order to rejoin the cluster. So the decision was made to stop allowing deletion while a node is disconnected.

That being said, when a NiFi cluster has a disconnected node, users can decide to navigate to the cluster UI and drop the disconnected node(s) from the cluster (this can also be scripted against the rest-api; see the sketch below). The cluster will then have full functionality again, as it will report all existing nodes as connected. It will require a restart of the dropped node(s) to get them to attempt to connect to the cluster again, but keep in mind that when a dropped node attempts to join the cluster and inherit the cluster flow, you may run into the problem described above.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
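For reference, a minimal sketch of scripting that "drop the disconnected node(s)" step against the rest-api instead of the cluster UI. It assumes an unsecured cluster reachable at the placeholder address below (a secured cluster additionally needs a bearer token or client certificate); confirm the endpoint and field names against the REST API docs for your NiFi version.

```python
# Sketch: list cluster nodes and delete (drop) any that are DISCONNECTED.
# Placeholder base URL; verify endpoint/field names for your NiFi version.
import requests

NIFI_API = "http://nifi-host:8080/nifi-api"  # placeholder host/port

cluster = requests.get(f"{NIFI_API}/controller/cluster").json()
for node in cluster["cluster"]["nodes"]:
    if node["status"] == "DISCONNECTED":
        print(f"Dropping node {node['address']}:{node['apiPort']} ({node['nodeId']})")
        resp = requests.delete(f"{NIFI_API}/controller/cluster/nodes/{node['nodeId']}")
        resp.raise_for_status()
```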
07-16-2024
05:43 AM
@3ebs The "Insufficient Permissions Untrusted proxy CN=Node_name,OU=NIFI" message shown in the web UI when you try to login is not an error; it is an authorization issue. It tells me that you have a multi-node NiFi cluster setup.

You are accessing the UI of one of the NiFi cluster nodes, where you are successfully authenticating your user, resulting in a user identity of "AMOHAMED279". At this point your user is only authenticated to that one node. What that node does next is load the NiFi canvas. In order to display that canvas, the information the user is authorized to see (process groups, stats, etc.) must be collected from all nodes. That request is forwarded to the elected cluster coordinator node, which then replicates the request to all nodes to get those details. So the node itself acts as a proxy in this process, making these requests on the authenticated user's behalf. In order for this to be successful, the NiFi nodes in your cluster must be authorized to proxy user requests. This message is telling you that one or more of your node identities has not been authorized to proxy user requests.

To help more here, I would need to know what you have configured in the authorizers.xml for user identity authorization. The most common NiFi cluster setup utilizes the StandardManagedAuthorizer, which calls the file-access-policy-provider (builds the authorizations.xml if it does not already exist), which in turn calls one of the user-group-providers (there are multiple options: CompositeConfigurableUserGroupProvider, CompositeUserGroupProvider, LdapUserGroupProvider, FileUserGroupProvider, etc.). The user-group-providers are responsible for generating user identities (case sensitive) for the purpose of setting up authorization policies. The file-user-group-provider is most commonly used to add the node user identities by creating the users.xml (if it does not already exist).

So somewhere in your authorizers.xml setup, your node user identities have not been added and/or authorized for various policies, including the very important "proxy user requests" policy. This would have been handled automatically on initial startup and first creation of the authorizations.xml and users.xml files, assuming a proper setup in the authorizers.xml (a minimal example sketch is included below).

Resources:
- Authorizer Configuration
- FileUserGroupProvider
- LdapUserGroupProvider
- Composite Implementations
- FileAccessPolicyProvider
- StandardManagedAuthorizer
- Configuring Users & Access Policies

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
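As a reference point only, here is a minimal sketch of what the relevant authorizers.xml sections typically look like for a two-node cluster using the setup described above. The admin identity and node DNs are placeholders; they must exactly match (case sensitive) your admin user identity and the DNs in your nodes' certificates.

```xml
<!-- Minimal sketch (placeholder identities); node identities appear in BOTH providers -->
<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial User Identity 1">AMOHAMED279</property>
        <property name="Initial User Identity 2">CN=node1.example.com, OU=NIFI</property>
        <property name="Initial User Identity 3">CN=node2.example.com, OU=NIFI</property>
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">AMOHAMED279</property>
        <!-- Node identities get the "proxy user requests" policy generated for them -->
        <property name="Node Identity 1">CN=node1.example.com, OU=NIFI</property>
        <property name="Node Identity 2">CN=node2.example.com, OU=NIFI</property>
    </accessPolicyProvider>
    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>
```

Keep in mind that if users.xml and authorizations.xml have already been generated, editing authorizers.xml alone will not regenerate them; you would either add the node users and the "proxy user requests" policy through the UI, or remove the generated files so they are rebuilt (which discards existing users and policies).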
07-16-2024
05:18 AM
1 Kudo
@PradNiFi1236 Not much information has been provided here to investigate with. What is the jar that is causing the issue? How is the jar execution being invoked? What is the full exception being encountered (is there a stack trace with the exception)? If you install JDK 1.8.0_312 and launch Apache NiFi 1.17 using that JDK version, does the issue persist?

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
07-15-2024
09:14 AM
1 Kudo
@PradNiFi1236 Another option might be to have two ListFile processors. ListFile one is configured with a file filter so that it is only looking for the .trg file. Once the .trg file is listed, it feeds an InvokeHTTP processor that you use to start the ListFile two processor (configured to list all the files, including the .trg file) via a NiFi rest-api call. ListFile two then feeds FetchFile to get each file's content. Somewhere later in this dataflow you use another InvokeHTTP processor to invoke a NiFi rest-api call that stops the ListFile two processor. So you have two different dataflows in the above example, with dataflow one watching for the trigger file and using it to start dataflow two (a sketch of the rest-api calls follows below).

REST API - NiFi 1.26.0
REST API - NiFi 2.x

--------

Another option requires you to create a custom processor or use a scripting processor to perform a complete listing when a trigger file is received. The trigger file comes from an upstream processor like ListFile (configured to only consume .trg files). The .trg file, in conjunction with its "path" attribute, is used by your custom processor to list all files from that target path.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
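To make the rest-api piece concrete, here is a minimal sketch of the two calls those InvokeHTTP processors would be making. The base URL and processor id are placeholders, and an unsecured NiFi is assumed; on a secured cluster you would also supply a bearer token or client certificate.

```python
# Sketch: start/stop a processor through the NiFi REST API.
# The run-status update must echo back the processor's current revision.
import requests

NIFI_API = "http://nifi-host:8080/nifi-api"                # placeholder host/port
LISTFILE_TWO_ID = "00000000-0000-0000-0000-000000000000"   # placeholder processor id

def set_run_status(processor_id, state):
    """state is 'RUNNING' or 'STOPPED'."""
    # Fetch the current entity so we can echo its revision back.
    current = requests.get(f"{NIFI_API}/processors/{processor_id}").json()
    body = {"revision": current["revision"], "state": state}
    requests.put(f"{NIFI_API}/processors/{processor_id}/run-status", json=body).raise_for_status()

# Dataflow one listed the .trg file -> start ListFile two
set_run_status(LISTFILE_TWO_ID, "RUNNING")
# ... later in the flow, once the listing has been handled -> stop ListFile two
set_run_status(LISTFILE_TWO_ID, "STOPPED")
```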
07-15-2024
08:53 AM
1 Kudo
@carlosst With a NiFi cluster, each node loads its own copy of the flow.json.gz into memory on startup. As such, processors like listeners will, when started, create a listener on each node:

http://{NiFi node hostname}:{port}/contentListener

There is no property to set the hostname, as NiFi uses the hostname unique to each NiFi node here. All you configure is the port (which must be an unused, non-privileged (>1024) port) and the Base Path (default "contentListener"). So any request received by the listener on a specific node will be processed by that specific node, unless your flow programmatically redistributes those requests to other nodes via a load-balanced connection.

What is commonly done here is to have an external load balancer in front of your NiFi cluster that handles distributing requests across all your listeners running on the different cluster nodes.

NOTE: When using an external load balancer in front of NiFi's UI URL, you must configure session affinity (sticky sessions) in that load balancer. This is not necessary if you are only using the external LB for endpoints like the listener, since those endpoints do not use token-based authentication.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
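As a simple illustration of why each node has its own listener URL, here is a sketch of a client posting to each node directly. The hostnames and port are placeholders and the default "contentListener" base path is assumed; with an external load balancer in place you would post to the load balancer's address instead and let it spread these requests across the nodes.

```python
# Sketch: each cluster node runs its own listener, so a client (or an external
# load balancer) must target individual node hostnames. Placeholder hosts/port.
import requests

nodes = ["nifi-node1.example.com", "nifi-node2.example.com", "nifi-node3.example.com"]
port = 9090  # whatever unused, non-privileged (>1024) port the listener is configured with

for host in nodes:
    resp = requests.post(f"http://{host}:{port}/contentListener", data=b"example payload")
    print(host, resp.status_code)
```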
07-15-2024
08:33 AM
@Sunny9 What version of CDP is installed? Is it CDP 7.1.7 or newer, as required by CFM 2.1.7? Were the parcel and CSDs installed on the CM host?

Did you verify proper permissions and ownership on the CFM parcel and CSDs? Parcels are typically owned by root with 755 permissions and are located in the /opt/cloudera/parcels/ folder. CSDs are typically owned by the "cloudera-scm" user with 644 permissions (there are two CSD jars for CFM: one for NiFi and one for NiFi-Registry). Add-on CSDs typically go in the /opt/cloudera/csd/ folder; make sure you did not add them to the /opt/cloudera/cm/csd/ folder by mistake.

Also make sure you have Distributed and Activated the CFM parcel from within Cloudera Manager.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
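If it helps, here is a small sketch (Linux-only, since it uses the pwd module; the paths are the typical locations mentioned above, so adjust for your actual parcel and CSD file names) that prints the owner and permission bits so you can compare them against the expected values.

```python
# Sketch: print owner and permission bits for the CFM parcel and CSD locations
# so they can be compared to the expected root/755 and cloudera-scm/644 values.
import os
import pwd
import stat

def report(path, expected_owner, expected_mode):
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    mode = stat.S_IMODE(st.st_mode)
    print(f"{path}: owner={owner} (expect {expected_owner}), mode={oct(mode)} (expect {oct(expected_mode)})")

# Parcels: typically root with 755
report("/opt/cloudera/parcels", "root", 0o755)

# Add-on CSD jars: typically cloudera-scm with 644
for name in os.listdir("/opt/cloudera/csd"):
    report(os.path.join("/opt/cloudera/csd", name), "cloudera-scm", 0o644)
```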
07-15-2024
08:14 AM
1 Kudo
@Ali_12012 The documentation for InvokeHTTP states that only the POST, PUT, and PATCH HTTP methods will be sent with a body. The processor does not support sending a body with the GET HTTP method; only headers are supported. You may need to build a custom processor for your use case, or perhaps use one of the scripting processors to accomplish it.

Thank you, Matt
07-11-2024
09:28 AM
@kellerj CFM has had several service pack versions released for 2.1.5, as well as the newer CFM 2.1.6 and CFM 2.1.7 versions. If you open the cluster UI (via the NiFi UI --> global menu in the upper right corner) and then click on the "View Details" icon to the far left of the node that is disconnecting, what Node Events are being reported?

Matt
07-02-2024
01:00 PM
1 Kudo
@enam I have a slight mistake in my NiFi Expression Language (NEL) statement in my above post. It should be as follows instead:

Property = filename
Value = ${filename:substringBeforeLast('.')}-${UUID()}.${filename:substringAfterLast('.')}

Thanks, Matt
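For illustration (hypothetical filename, and the UUID shown is only a sample value): an incoming FlowFile named data.csv would be renamed to something like data-067e6162-3b6f-4ae2-a171-2470b63dff00.csv, because the expression keeps everything before the last ".", appends a dash and a freshly generated UUID, and then re-appends the original extension.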