Member since
07-30-2019
3421
Posts
1624
Kudos Received
1010
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 65 | 01-13-2026 11:14 AM |
| | 201 | 01-09-2026 06:58 AM |
| | 522 | 12-17-2025 05:55 AM |
| | 583 | 12-15-2025 01:29 PM |
| | 563 | 12-15-2025 06:50 AM |
06-06-2024
07:12 AM
@alan18080 NiFi-Registry only pushes to the GitFlowPersistenceProvider while running, and it only reads from Git on startup. The GitFlowPersistenceProvider also only contains the flow definitions for the version-controlled process groups. Each NiFi-Registry has a metadata database that maintains the knowledge of which buckets exist, which versioned items belong to which buckets, as well as the version history for each item. So if you are trying to share a single Git repo across multiple running NiFi-Registry instances, this explains why you are seeing missing versions at times across your multiple instances. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
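For reference, the Git persistence is configured in NiFi-Registry's conf/providers.xml. A minimal sketch (the storage directory, remote name, and credentials below are placeholders you would adapt):

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <property name="Flow Storage Directory">./flow_storage</property>
    <!-- pushes happen while NiFi-Registry is running; the repo is only read at startup -->
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">git-user</property>
    <property name="Remote Access Password">git-password</property>
</flowPersistenceProvider>
```

Note that this provider only stores the flow definitions; the bucket/item metadata lives in each instance's own metadata database, which is why sharing the Git repo alone does not keep multiple instances in sync.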
06-04-2024
10:02 AM
1 Kudo
@yuanhao1999 I see that you raised an Apache Jira for this same issue, https://issues.apache.org/jira/browse/NIFI-13340, and that your issue is likely related to https://issues.apache.org/jira/browse/NIFI-13281. When you delete and re-import the Process Group from NiFi-Registry, all your components get new random UUIDs assigned to them, which effectively eliminates the stuck condition. Were changes being made to the process group configuration while FlowFile(s) were still queued in a connection within the Process Group? Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
06-04-2024
09:54 AM
@inkerinmaa Out of the box, Apache NiFi is configured to be secure, and most browsers no longer default to HTTP and will force a redirect to HTTPS. NiFi is going to come up secured if you have the HTTPS port property configured in the nifi.properties file. So you would need to unset that property for NiFi to start unsecured. Thanks, Matt
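As a sketch, the relevant nifi.properties changes to start NiFi unsecured would look like this (the port value is just an example):

```
# clear the HTTPS port so NiFi does not start secured
nifi.web.https.port=
nifi.web.https.host=
# set an HTTP port instead
nifi.web.http.port=8080
nifi.web.http.host=
```

Keep in mind an unsecured NiFi performs no authentication or authorization, so this is only advisable for evaluation environments.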
06-04-2024
09:03 AM
@Alexy Do you specifically need to produce so much logging? What loggers have you added to your logback.xml? How many are set to "INFO" level logging? If you only want to log exceptions, you could change the "INFO" to "WARN" or "ERROR" to greatly reduce the amount of INFO logging being produced. As far as NiFi performance goes, it is all about managing CPU load average and disk I/O (specifically the disk I/O of the disks where NiFi's content, flowfile, and provenance repositories are located). You could make sure your logs are being written to a separate disk to prevent that disk I/O from impacting NiFi's repository disks. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
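For example, in logback.xml you could raise the level on a chatty logger (the logger name shown is NiFi's root package; adjust it to whichever loggers you have added):

```xml
<!-- only WARN and ERROR messages from NiFi framework classes will be logged -->
<logger name="org.apache.nifi" level="WARN"/>
```

This is a sketch of the general logback pattern; the actual logger names to tune depend on what is flooding your logs.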
06-03-2024
11:09 AM
@inkerinmaa An Apache NiFi multi-node clustered setup is much different than a standalone NiFi installation. Your exception is related to a TLS trust issue between your nodes. In a NiFi cluster, one of the nodes will be elected to the role of "cluster coordinator" by ZooKeeper (ZK). All of the nodes communicate with ZK in order to learn which node is currently assigned to this role and then begin sending heartbeats to that elected node in order to join the cluster. It looks like you are just allowing your NiFi nodes to auto-generate their own self-signed certificates on each node? This works fine in a standalone NiFi setup; however, you'll need to create keystores and truststores for your NiFi cluster nodes so that proper mutual trust can be established. I also see that you are using the single-user login provider and authorizer. For a NiFi cluster, you'll also want to use more production-ready providers, like the ldap-provider for login and the StandardManagedAuthorizer for all your authorizations. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
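One way to create per-node keystores and truststores with shared trust is the TLS toolkit shipped in the Apache NiFi toolkit. This is just a sketch (the hostnames are placeholders for your actual node FQDNs), and other approaches such as certificates from your own CA work equally well:

```
# generates a CA, plus a keystore/truststore pair and nifi.properties
# security stanza for each listed node
./bin/tls-toolkit.sh standalone -n 'nifi1.example.com,nifi2.example.com,nifi3.example.com'
```

Because every node's certificate is signed by the same generated CA, each node's truststore trusts every other node, which is what establishes the mutual trust the cluster heartbeats need.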
05-30-2024
01:27 PM
@scoutjohn I installed an out-of-the-box Apache NiFi 1.26 using the single-user providers and the NiFi self-signed generated certificates. I was able to send provenance events via the S2SProvenanceReportingTask successfully back to a Remote Input Port on the same NiFi with no issues, so authorization is not an issue here. I tested using both HTTP and RAW transport protocols successfully. I also validated that S2S was working by setting up a Remote Process Group to send FlowFiles to a Remote Input Port as well. Here is the dataflow I set up: You can see in the above that I generated some FlowFiles that were sent over S2S to the "Input1" remote port. You can also see that my "prov" port received provenance events from the S2SProvenanceReportingTask. My S2S settings from the nifi.properties file:

```
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10001
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
```

My Remote Process Group configuration: Switching to the "HTTP" transport protocol also worked. S2SProvenanceReportingTask configuration: While all of this worked correctly, sending provenance events via the S2SProvenanceReportingTask back to the same NiFi is not advisable. It creates an endless loop of provenance events: for every FlowFile received on the "prov" port, another provenance "RECEIVE" event is created, which then gets sent by the reporting task, so an infinite loop is created. You would certainly have difficulty related to authentication and authorization sending to another NiFi instance using the out-of-the-box keystore, truststore, and single-user providers between two out-of-the-box NiFi deployments. But for testing purposes this works. Now, I see from your configuration that you set:

```
nifi.remote.input.host=cd8e8c899db6
```

That makes me wonder whether:

1. That hostname is a SAN entry in the NiFi generated keystore certificate. You could use the keytool command to check:

```
keytool -v -list -keystore keystore.p12
```

2. That hostname is resolvable and reachable by your NiFi instance.

Try changing that property to "localhost" and see if that resolves your issue. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-30-2024
10:00 AM
@hegdemahendra The small number in the upper-right corner of any processor shows the number of active threads at the time the UI was last refreshed. The default auto-refresh of the UI is every 30 seconds. It turns red when there is an active terminated thread. So with your example above, 2(1), it is telling you that this processor has 2 active threads and 1 terminated thread. A terminated thread is the result of manual user intervention. When a processor is asked to change run-status from "running" to "stopped" (Stopping Component), it first transitions into a state of "stopping". It does not transition to "stopped" until all active threads complete. NiFi provides an option to "terminate" when a component is stuck in a stopping state because of active threads. Terminate (Terminating a component's tasks) does not kill the active thread, since all threads belong to a single JVM. What the terminate function does is release any FlowFiles tied to the active thread(s) back to their originating connection and mark the thread as terminated. That terminated thread will continue to execute until it completes or the JVM is restarted. Should that now-"terminated" thread complete, all output is sent to /dev/null instead of resulting in any downstream movement. This allows users to handle scenarios where long-running or hung threads are preventing the stopping, reconfiguration, and starting of a processor. When a terminated processor is restarted, it will re-process the FlowFile(s) that were originally tied to the terminated thread(s). This prevents any data loss from occurring. If a terminated thread is in a permanently hung state, the only way to get rid of it completely is a restart of NiFi, which will kill the JVM after a graceful shutdown period. As far as your custom processor getting stuck, you would need to collect thread dumps and inspect those to see what your thread is waiting on that is blocking it from progressing, and address that in your custom code.
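The terminate semantics can be sketched in plain Python (this is a simplified model for illustration only, not actual NiFi code): the thread is never killed, it is merely flagged, and any output it eventually produces is discarded rather than moved downstream.

```python
import threading

class Task:
    """Toy model of a processor task whose thread can be 'terminated'."""
    def __init__(self):
        self.terminated = threading.Event()  # set by the user's Terminate action
        self.output = []                     # stands in for the downstream connection
        self.done = threading.Event()

    def run(self, work_finished: threading.Event):
        work_finished.wait()                 # simulate a long-running piece of work
        result = "flowfile"
        if not self.terminated.is_set():
            self.output.append(result)       # normal completion: moves downstream
        # else: output is discarded (sent to /dev/null), no downstream movement
        self.done.set()

work = threading.Event()
task = Task()
t = threading.Thread(target=task.run, args=(work,))
t.start()

task.terminated.set()   # user clicks "Terminate" while the thread is still active
work.set()              # the thread later completes on its own anyway
t.join()

print(task.output)      # → [] : the terminated thread finished but produced nothing downstream
```

The key point the model shows: terminating only changes what happens to the thread's results; the thread itself keeps running until it finishes or the JVM exits.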
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-30-2024
06:17 AM
@Vikas-Nifi @ckumar is 100% correct. Only fields explicitly marked as supporting NiFi Expression Language (NEL) can support a NEL expression like "${schedule}". I am, however, curious about your use case and why you would even be trying to do this. From what you shared, you are extracting a cron schedule from the JSON content of some FlowFiles traversing an EvaluateJsonPath processor. That "schedule" is added on to the NiFi FlowFile as a FlowFile attribute (key=value pair). This would not make that key=value pair accessible to any other NiFi component unless the FlowFile containing the FlowFile attribute was processed by that other component. However, in your shared dataflow you do not mention that EvaluateJsonPath connects to your InvokeHTTP processor via an inbound dataflow connection. (Keep in mind that even if you did do this, it does not change the fact that the Run Schedule property does not support NEL.) I just wanted to clarify how FlowFile attributes are and can be used. Also keep in mind that the Run Schedule is a scheduler only. The Run Schedule set on a processor controls when the NiFi controller will schedule the execution of the processor's code. It does not mean that the processor will immediately execute at the time of scheduling (execution may be delayed while waiting for an available thread from the thread pool). All scheduled components share a thread pool, and the NiFi framework handles assigning threads to the next scheduled component as threads become available. So the NiFi framework needs to know the scheduling for a component when it is started; otherwise, NiFi would never know when to schedule it to execute. Unless a component property has an explicit tooltip telling you it supports NEL, it does not. For NiFi processor components, you will find that only some processor-specific properties within the "PROPERTIES" tab support NEL. This information is available not only through property tooltips, but also in the processor's documentation.
Examples: Even when NEL is supported there is a scope. It may support FlowFile attributes, the Variable Registry (going away in NiFi 2.x releases), or both. Thank you, Matt
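To make the FlowFile-attribute point concrete, here is a plain-Python sketch of what EvaluateJsonPath effectively does (the JSON document, JsonPath, and attribute names are hypothetical examples, not from the original question): a value is pulled from the FlowFile's content and stored as an attribute on that one FlowFile only.

```python
import json

# A FlowFile is (roughly) content bytes plus an attribute map.
flowfile_content = '{"job": {"name": "nightly-load", "schedule": "0 0 2 * * ?"}}'
flowfile_attributes = {"filename": "job.json"}

# EvaluateJsonPath with JsonPath $.job.schedule, evaluated by hand here:
doc = json.loads(flowfile_content)
flowfile_attributes["schedule"] = doc["job"]["schedule"]

print(flowfile_attributes["schedule"])  # 0 0 2 * * ?
```

The extracted value travels with this FlowFile only. A downstream processor could reference it via "${schedule}" in a NEL-capable property, but a property like Run Schedule, which does not support NEL, can never see it.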
05-29-2024
11:23 AM
1 Kudo
@Naveen_Sagar The bearer token is issued by a specific NiFi node for a specific user identity. That bearer token has a limited lifetime and cannot be used to authenticate a user on any other NiFi node (even one in the same cluster as the node that provided the token). All rest-api endpoints require some level of authorization, so simply having a valid bearer token for an authenticated user identity does not mean that user is authorized to access/interact with every rest-api endpoint. In your case, the user would need "operate the component", or "view the component" and "modify the component", authorizations in order to change the run-status. You should inspect the nifi-user.log on the aaa.com NiFi server to see what user identity attempted to change the run-status on that node and was not authorized. Then verify the necessary authorization is set up for that user identity and try your curl command again. And make sure, as @ckumar pointed out in his curl example, that you are using the "-k" flag, which allows curl to automatically trust the serverAuth certificate presented in the TLS exchange with your secured NiFi. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
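As a sketch of the flow (the hostname follows your example, while the processor id, credentials, and revision version are placeholders): the token must be requested from, and then used against, the same node.

```
# request a bearer token from node aaa.com
TOKEN=$(curl -k -X POST 'https://aaa.com:8443/nifi-api/access/token' \
  --data-urlencode 'username=admin' --data-urlencode 'password=********')

# use that token against the SAME node to change a processor's run-status
curl -k -X PUT "https://aaa.com:8443/nifi-api/processors/<processor-id>/run-status" \
  -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
  -d '{"revision":{"version":0},"state":"STOPPED"}'
```

If the second call is sent to a different node than the one that issued the token, authentication fails regardless of the user's authorizations.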
05-29-2024
05:42 AM
@scoutjohn The article you are using for reference was written back in 2016, before NiFi was changed to start secure out of the box. It is written entirely around that unsecured NiFi example. You could always unsecure your NiFi and test out S2S capability; that would at least allow you to test/evaluate the functionality. When NiFi is secure, both authentication and authorization must be handled. This includes authentication and authorization for S2S operations. An out-of-the-box installation of NiFi utilizes self-generated self-signed certificates to create the keystore and truststore files needed for mutual TLS. It also uses a very basic, non-production single-user-provider for user authentication and a single-user-authorizer for user/client authorization. These basic providers make it easy to evaluate NiFi, but are not robust enough to support all features. Is this what you are still using, or have you created your own keystore and truststore files and set up non-single-user authentication and authorization providers? To be honest, I always set up production-ready NiFi instances and clusters that don't use the auto-generated self-signed certificates and/or single-user providers. I can't say that I have tried using S2S in such an out-of-box environment, so I can't say whether the single-user-authorizer supports the needed authorizations. That being said, I see you set nifi.remote.input.http.enabled=true, but all that property does is allow the HTTP transport protocol, meaning NiFi would support transferring FlowFiles over the HTTP protocol. That does not mean unsecured; it could be http or https depending on the destination URL. The S2S properties in nifi.properties need to be modified to support secure S2S by setting nifi.remote.input.secure=true (you did not comment on whether you made that change or not).
1. Is your S2SProvenanceReportingTask producing any bulletin messages?
2. Are you seeing any not-authorized related log lines in the nifi-user.log?
3. What keystore and truststore did you configure in the StandardRestrictedSSLContextService controller service?
I'll try to mess around with an out-of-box setup, if that is what you are using, to see whether what you are trying to do is possible in such a non-production-ready setup when I have some time. Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt