Member since: 07-30-2019
Posts: 3172
Kudos Received: 1571
Solutions: 918

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 96 | 02-14-2025 08:43 AM |
|  | 101 | 02-12-2025 10:34 AM |
|  | 467 | 02-06-2025 10:06 AM |
|  | 143 | 01-31-2025 09:38 AM |
|  | 127 | 01-30-2025 06:29 AM |
02-14-2025
08:43 AM
@hus Thank you for the clarification on your use case. The only purpose-built processor NiFi has for appending lines to an existing file is the PutSyslog processor, but it is designed to send RFC5424- and RFC3164-formatted syslog messages to a syslog server and can't be used to append directly to a local file. However, your use case could be solved using the ExecuteStreamCommand processor and a custom script. The ExecuteStreamCommand processor passes a FlowFile's content to the input of the script.

Example: I created the following script, placed it on my NiFi node somewhere my NiFi service user has access, and gave the NiFi service user execute permissions on it (I named it file-append.sh):

```bash
#!/bin/bash
STD_IN=$(</dev/stdin)
touch "$1/$2"
echo "$STD_IN" >> "$1/$2"
```

This script takes stdin from the ExecuteStreamCommand processor, which contains the content of the FlowFile being processed. $1 and $2 are command arguments I define in the ExecuteStreamCommand processor, which I use to dynamically set the path and filename the content will be appended to. The script then takes the FlowFile's content and either starts a new file or, if a file with the passed filename already exists, appends to it. You can see that I set my two command arguments by pulling values from the "path" and "filename" NiFi FlowFile attributes set on the FlowFile being processed. With this dataflow design you can append lines to various files as needed.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
02-13-2025
01:22 PM
@OfekRo1 I looked at the StandardProvenanceEventRecord source on GitHub, plus I know many of the open source contributors. 😀 https://github.com/rdblue/incubator-nifi/blob/master/commons/data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java

You're welcome, and thank you for being part of the community!
02-13-2025
09:56 AM
@mks27 What you are trying to accomplish is not possible in NiFi. In my 15 years of working with NiFi, I believe this is the first time I have seen such a request.

So what you are expecting is that NiFi presents the login window and a user supplies a username and password. You then expect NiFi to attempt authentication via one LDAP provider and, if that results in an unknown-username or bad-password response, move on to the next LDAP provider and attempt again? The users that will need access to your NiFi don't all exist in just one of your LDAPs?

I suppose if you have a multi-node NiFi cluster setup, you could configure the ldap-provider on one node to use one of the LDAP servers and the ldap-provider on another node to use the other LDAP server. Since the NiFi cluster can be accessed from any node, you would just need to make sure your users access the NiFi cluster from the node that is configured with their LDAP server.

NOTE: Authorization (which happens after successful authentication) needs to be identical on all nodes in a cluster, but that is not an issue here. You'll just configure the authorizers.xml so that all user and group identities from both LDAPs are authorized appropriately.

This bootleg way of facilitating authentication via multiple LDAPs is not something I have ever tested/tried, but I believe it would work (a configuration sketch follows below). You could also raise an improvement jira in the Apache NiFi Jira project to see if the community might be interested in implementing this change, but I don't anticipate there is much demand for it.
https://issues.apache.org/jira/browse/NIFI

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
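For illustration only, a minimal sketch of what the per-node login-identity-providers.xml could look like. Hostnames, DNs, and search values are hypothetical; both nodes keep the same <identifier> so nifi.properties does not change between nodes:

```xml
<!-- login-identity-providers.xml on node 1 (inside the <loginIdentityProviders> element) -->
<!-- points at the first LDAP server -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=manager,dc=example,dc=com</property>
    <property name="Manager Password">********</property>
    <property name="Url">ldap://ldap1.example.com:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>

<!-- login-identity-providers.xml on node 2 would be identical except for the Url, e.g.: -->
<!-- <property name="Url">ldap://ldap2.example.com:389</property> -->
```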
02-13-2025
07:52 AM
@hus So what I am understanding is that you do not want to overwrite the existing file, but rather update an existing file. Is my understanding correct? Can you share more detail or an example of what you are trying to accomplish? Are you looking for a way to search the content of an existing file for a specific string and then replace that string with a new string?

NiFi processors are designed to perform work against FlowFiles contained within NiFi, but there are processors that can be triggered by a FlowFile to run a script against files outside of NiFi. You could also ingest the file, modify its content, and then write out the newly modified file.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
02-13-2025
06:26 AM
@mks27 What you are trying to do above is not possible in Apache NiFi. Apache NiFi only supports defining one login identity provider via nifi.security.user.login.identity.provider. It does not support a comma-separated list of multiple login providers, so what is happening is that NiFi is expecting to find a login provider in the "login-identity-providers.xml" file with:

<identifier>ldap-provider-1, ldap-provider-2</identifier>

which does not exist, and thus the error you are seeing.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
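In other words, nifi.properties should reference exactly one identifier, for example nifi.security.user.login.identity.provider=ldap-provider, and that identifier must match a single <provider> entry in login-identity-providers.xml. A sketch of the matching entry (the provider name and connection details are placeholders):

```xml
<!-- login-identity-providers.xml: the single provider whose identifier
     matches the value of nifi.security.user.login.identity.provider -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <!-- ... LDAP connection and user search properties ... -->
</provider>
```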
02-13-2025
06:18 AM
@hus The PutFile processor has a conflict resolution option of "replace". When it is used and a file with the same filename already exists in the target directory, that file will be replaced with the file being written by the PutFile processor.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
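For reference, a minimal sketch of the relevant PutFile settings (the directory path is hypothetical):

```
PutFile processor properties:
  Directory                    = /data/output
  Conflict Resolution Strategy = replace    (other allowable values: fail, ignore)
  Create Missing Directories   = true
```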
02-13-2025
06:14 AM
@mridul_tripathi The best way to check if two files have the exact same content is to generate a hash of each file's content and then compare those two hashes to see if they are the same. While comparing hash values allows you to detect whether the content is the same between NiFi FlowFiles, it sounds like you want to determine what is different and not just that they are different? NiFi does not have a processor designed to do that.

So what is the full use case here? Is SFTP1 the source of truth, always expected to have the correct content? Is SFTP2 the backup, expected to have content that matches SFTP1?

Example use case:
1. Pull a file from SFTP2 (the file to be verified) and create a FlowFile attribute containing the hash of its content (Hash-SFTP2).
2. Zero out the content (ModifyBytes), then pass the FlowFile to a FetchSFTP configured to fetch the file with the same filename from SFTP1.
3. Create another FlowFile attribute containing the hash of that content (hash-sftp1).
4. Use a RouteOnAttribute that compares the two hash attributes to see if they are equal (see the expression sketch below). If they are not equal, route the FlowFile to a PutSFTP that overwrites the file on SFTP2 with the FlowFile's current content from SFTP1, so that both SFTP servers then have matching content for this filename.

Now if your use case is to somehow output a FlowFile containing all the differences in the content, that is more challenging and would likely require something custom (a custom processor or a custom script).

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
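As a sketch of the comparison step described above, the RouteOnAttribute routing rule could use NiFi Expression Language against the two hash attributes. The attribute names Hash-SFTP2 and hash-sftp1 come from the example flow above (quoted in the expression because they contain hyphens); the relationship name content.mismatch is hypothetical:

```
# Dynamic property added to RouteOnAttribute
# (Routing Strategy: Route to Property name)
content.mismatch = ${'Hash-SFTP2':equals(${'hash-sftp1'}):not()}
```

FlowFiles that match content.mismatch (the hashes differ) are routed to the PutSFTP that overwrites the file on SFTP2; FlowFiles whose hashes match go to the unmatched relationship, which can be auto-terminated.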
02-12-2025
10:34 AM
@fy-test The NiFi node that disconnects due to a flow mismatch should inherit the cluster flow when it attempts to rejoin the cluster. The only time this is not possible is if the cluster flow includes a change that would result in data loss. Example: the cluster flow has a connection removed that, on the connecting node, still has queued FlowFiles.

NiFi has no feature to force removal/archive of a flow.json.gz on a disconnected node. You could file an Apache NiFi improvement jira here:
https://issues.apache.org/jira/projects/NIFI

But the first step is to identify why your node is not able to inherit the cluster flow and rejoin the cluster. What is the exception logged when it attempts to rejoin the cluster?

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
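If you do end up needing to force the disconnected node to take the cluster flow, the usual manual approach is to stop NiFi on that node and move its local flow aside before restarting. This is a sketch only, assuming a default conf directory layout under a hypothetical /opt/nifi install and Apache NiFi 2.x; adjust paths to your environment:

```bash
# On the disconnected node only, with the NiFi service stopped:
cd /opt/nifi/conf

# Keep a dated copy rather than deleting outright.
mv flow.json.gz "flow.json.gz.bak-$(date +%Y%m%d)"
# (on Apache NiFi 1.x the file is flow.xml.gz)

# Restart the node; with no local flow present it will inherit
# the cluster flow when it joins.
```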
02-12-2025
10:24 AM
@Darryl The ListenUDP processor will listen on a specific port and can be configured to listen on a specific network interface on the NiFi host. The question is: which network interface on the NiFi host is assigned an address associated with your multicast 224.1.1.1 IP address? You can't bind any NiFi processor to a specific IP address. NiFi supports multi-node clusters where every node runs the exact same dataflows. Let's assume your multicast group is sending datagrams to port 10000 on the eth3 network interface on your NiFi host; you would then configure the ListenUDP processor to create a listener using port 10000 on eth3.

The Site-to-Site properties in nifi.properties have absolutely nothing to do with any processor components you add to the NiFi UI. The Site-to-Site (S2S) NiFi protocol is used for the transfer of NiFi FlowFiles between NiFi nodes, and it is utilized only by Remote Process Groups and the load-balanced connections capability. You'll want to unset 10000 in the S2S settings since you want your ListenUDP processor to bind to that port number. You'll also want to unset 224.1.1.1 as the S2S input host; S2S can't bind to a host that does not resolve to one of the NiFi network interfaces.

S2S reference documentation:
Site-to-Site
Site to Site Properties

Please help our community grow and thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
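To make the port conflict concrete, here is a sketch of the two places involved. The interface name eth3 and port 10000 come from the example above; everything else is a placeholder:

```
# nifi.properties - Site-to-Site settings; do NOT reuse the ListenUDP port here
nifi.remote.input.host=
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
nifi.remote.input.http.enabled=true

# ListenUDP processor properties (set in the UI, not nifi.properties)
#   Local Network Interface = eth3
#   Port                    = 10000
```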
02-11-2025
09:34 AM
@nifi-srinikr The exception you shared is complaining about a null value for the "flowConfigurationFile":

Caused by: java.lang.NullPointerException: Cannot invoke "java.io.File.exists()" because "flowConfigurationFile" is null

On startup, NiFi checks whether the flow configuration file it is configured to use exists. If it does not, NiFi will create it. In your case it appears this property is null, and thus the exists check is failing. The configuration file location and filename are specified by this property in the nifi.properties file:

nifi.flow.configuration.file=<path to>/flow.xml.gz (Apache NiFi 1.x releases)
or
nifi.flow.configuration.file=<path to>/flow.json.gz (Apache NiFi 2.x releases)

What version of Apache NiFi are you using?

Please help our community grow and thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
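For comparison, the out-of-the-box values are along these lines (paths are relative to the NiFi install directory):

```
# nifi.properties default (Apache NiFi 1.x)
nifi.flow.configuration.file=./conf/flow.xml.gz

# nifi.properties default (Apache NiFi 2.x)
nifi.flow.configuration.file=./conf/flow.json.gz
```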