Member since
07-30-2019
3406
Posts
1623
Kudos Received
1008
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 337 | 12-17-2025 05:55 AM |
| | 398 | 12-15-2025 01:29 PM |
| | 406 | 12-15-2025 06:50 AM |
| | 371 | 12-05-2025 08:25 AM |
| | 604 | 12-03-2025 10:21 AM |
11-01-2024
08:26 AM
@HenriqueAX It is safe to restart the NiFi service without encountering any data loss. NiFi is designed to protect against data loss for FlowFiles traversing the connections between processor components added to the NiFi canvas. FlowFiles are persisted to disk (content is stored in content claims within the "content_repository", and the metadata/attributes associated with a FlowFile are stored in the "flowfile_repository"). These repositories should be protected against loss through RAID storage or some other form of protected storage.

When a processor is scheduled to execute, it begins processing a FlowFile from an inbound connection. Only when the processor has completed execution is the FlowFile moved to one of the processor's outbound relationships. If you were to shut down NiFi, or NiFi were to abruptly die, then upon restart FlowFiles are loaded in their last known connection and execution on them starts over at that processor. There exist some race conditions in which data duplication could occur (NiFi happens to die just after processing of a FlowFile is complete, but before it is committed to the downstream relationship, resulting in the FlowFile being reprocessed by that component). But this only matters where a specific processor is writing content external to NiFi, or when NiFi is ingesting data in some scenarios (consuming from a topic and dying after consumption but before the offset is written, resulting in the same messages being consumed again).

With a normal NiFi shutdown, NiFi has a configurable shutdown grace period. During that grace period NiFi no longer schedules processors to execute new threads, and NiFi waits up to that configured grace period for existing running threads to complete before killing them.

IMPORTANT: Keep in mind that each node in a NiFi cluster executes the dataflows on the NiFi canvas against only the FlowFiles present on that individual node; one node has no knowledge of the FlowFiles on another node.
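For reference, the two repositories described above have their locations set in nifi.properties; a minimal sketch using the illustrative default relative paths (adjust for your install, and point them at RAID-backed or otherwise protected storage):

```properties
# FlowFile metadata/attributes (write-ahead log)
nifi.flowfile.repository.directory=./flowfile_repository
# FlowFile content claims
nifi.content.repository.directory.default=./content_repository
```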
NiFi also persists state (for those components that use local or cluster state) either in a local state directory or in ZooKeeper for cluster state. Even in a NiFi cluster, some components will still use local state (example: ListFile). So protection of the local state directory via RAID storage or other means of protected storage is important. Loss of state would not result in data loss, but rather the potential for a lot of data duplication through ingestion of the same data again (depending on the processors used).

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
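The local state directory mentioned above is configured in state-management.xml (referenced from nifi.properties); a minimal sketch of the local provider, with an illustrative path:

```xml
<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">./state/local</property>
</local-provider>
```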
10-31-2024
06:33 AM
1 Kudo
@drewski7 While you added the public cert for your NiFi8444 to the truststore used in the NiFi8443 StandardRestrictedSSLContextService, did you do the same in reverse? Does your StandardRestrictedSSLContextService also include the keystore? The keystore contains the private key that is used in the mutual TLS exchange with NiFi8444. NiFi8443's public cert (or complete trust chain) needs to be added to the truststore configured in the nifi.properties file on NiFi8444.

You'll also want to look at the nifi-user.log on NiFi8444 to see the full exception thrown when the NiFi8443 reporting task is trying to retrieve the Site-to-Site (S2S) details. Identities will be manipulated by matching identity mapping patterns set up in the nifi.properties file, so you'll want to verify that also.

Additionally, are you still using the Single-User-provider on NiFi8444 along with the NiFi auto-generated keystore and truststore? (I saw CN=localhost in one of your images.) You should create a keystore and truststore with a proper DN and SANs for use with S2S.

Hope this helps with your investigation and troubleshooting.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
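To verify both sides of the mutual TLS exchange described above, keytool can list what each store actually contains (file names, store type, and passwords below are placeholders, not values from this thread):

```shell
# List the private key entry NiFi8443 presents during the TLS handshake
keytool -list -v -keystore keystore.p12 -storetype PKCS12 -storepass <password>

# List the trusted certificate entries in the truststore on NiFi8444
keytool -list -v -keystore truststore.p12 -storetype PKCS12 -storepass <password>
```

Comparing the Owner/Issuer DNs and SAN entries from both outputs is usually the quickest way to spot a missing link in the trust chain.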
10-31-2024
06:07 AM
2 Kudos
@SS_Jin Another option is to use the NiFi Expression Language (NEL) function "literal()" in the NEL statement:

${myattr:append(${literal('$$$')}):prepend(${literal('$$$')})}

This removes the need to make sure you are using the correct number of "$" characters to escape "$" in the NEL statement.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
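As a hypothetical illustration of the statement above (assuming a FlowFile attribute myattr with value abc, and assuming literal('$$$') yields the three dollar signs as intended):

```
myattr = abc
${myattr:append(${literal('$$$')}):prepend(${literal('$$$')})}   =>   $$$abc$$$
```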
10-28-2024
06:21 AM
1 Kudo
@pankajgaikwad While it is wonderful that you have shared your InvokeHTTP processor configurations, I don't think enough information has been provided for anyone to offer assistance here. All we know from your post is that some POST rest-api call was made against some service endpoint using some URL, to send form data of some content-type, to which an illegal state exception was thrown. Sharing details about your use case is always helpful:

- What is the target endpoint service (Polarion? which community members may not be familiar with)?
- What is the full rest-api call you are trying to make?
- Were you able to successfully make that same rest-api call via curl local to the NiFi server?
- What is the structure of your NiFi FlowFile (what is in the FlowFile's content, and what are the FlowFile's attributes when the FlowFile reaches the InvokeHTTP processor)?
- What are the complete configurations of the InvokeHTTP processor? (Some property values are cut off in your images.)
- What documentation are you following for this rest-api call?

As far as the exception goes:

- Was the "java.lang.IllegalStateException: closed" accompanied by any stack trace in the nifi-app.log?
- What was logged within the target service when this POST request was made?

I see you shared your full dataflow in another community post. An example of the original file you are obtaining via "GetFile", what attributes you are adding to the FlowFile, and how you are modifying that content before the InvokeHTTP may also be helpful here. Sharing the additional input and details may make it possible for someone in the community to provide you with some suggestions and solutions.

Thank you, Matt
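As a sanity check for the questions above, a form-data POST can be exercised with curl local to the NiFi server before involving the processor; everything here (URL, endpoint path, token, field name, file) is a hypothetical placeholder, not the actual Polarion call:

```shell
# Hypothetical endpoint and form field -- substitute your actual rest-api call
curl -v -X POST "https://polarion.example.com/rest/v1/some/endpoint" \
  -H "Authorization: Bearer <token>" \
  -F "file=@document.xml;type=application/xml"
```

If the curl call succeeds but InvokeHTTP fails, the difference between the curl request and the FlowFile content/attributes is usually where the answer lies.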
10-24-2024
05:09 AM
1 Kudo
@HiAnil The HDF 3.5 release is based off Apache NiFi 1.12 and was released more than 5 years ago. It reached End-Of-Life as of April 2023. The NiFi-Registry service in HDF 3.5.2 only lists PostgreSQL 9.5+, 10.x, and 11.x as tested versions. I can tell you that HDF 3.5.2 has never been tested or verified against Postgres 14 or 15, and I suspect there could likely be incompatibility issues. I would suggest testing this yourself before upgrading in any production environment.

Keep in mind that using such an old release exposes you to CVEs addressed in the many releases put out since HDF 3.5.2. Additionally, the product has had many improvements and new features added over the years.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
10-23-2024
09:50 AM
@salahevops Upgrading to Apache NiFi 1.21 or newer should resolve your issue. The latest Apache NiFi 1.x branch release is 1.27. The Apache NiFi 2.x branch is still in its developmental milestone release cycle (currently at 2.0.0-M4). There was a vote put forth in Apache NiFi to release the first official 2.0 release.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
10-23-2024
08:01 AM
@salahevops I suspect you are running an Apache NiFi release older than 1.21? If so, you may be encountering this issue, addressed through an improvement: https://issues.apache.org/jira/browse/NIFI-4890

Azure AD sets a lifetime on the client-issued token; that is likely 30 minutes. The token can be refreshed, but NiFi OIDC in older versions does not have the ability to do the background refresh. Further improvements were added in NiFi 2.0 to add the refresh configuration timer: https://issues.apache.org/jira/browse/NIFI-12135

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
10-23-2024
06:40 AM
1 Kudo
@HenriqueAX The NiFi keystore contains a private key entry and its certificate. The NiFi truststore contains trusted cert entries (public certificates). You should combine all the truststores into one truststore containing all the public certificates, and use that same truststore on all the NiFi nodes and the NiFi-Registry host.

It may also help you understand what is happening by looking at the output from openssl:

openssl s_client -connect <nifi hostname>:<nifi port> -showcerts
openssl s_client -connect <nifi-registry hostname>:<nifi-registry port> -showcerts

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
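One way to build the combined truststore described above is to import each public certificate into a single store with keytool (aliases, file names, store type, and the password below are placeholders):

```shell
# Import each host's public cert into one shared truststore
keytool -importcert -noprompt -alias nifi-node1 -file nifi-node1.crt \
        -keystore combined-truststore.p12 -storetype PKCS12 -storepass <password>
keytool -importcert -noprompt -alias nifi-registry -file nifi-registry.crt \
        -keystore combined-truststore.p12 -storetype PKCS12 -storepass <password>
```

Repeat the import for every node's public cert (or just the shared CA cert if all certs are signed by one certificate authority), then distribute that one file everywhere.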
10-23-2024
06:33 AM
@AndreyDE After your EvaluateXPath processor, you have a FlowFile that now has a FlowFile attribute "/grn" with a value of "3214600023849". In ReplaceText, it appears your intent is to replace the entire content of the FlowFile with the value returned by the NiFi Expression Language (NEL) statement:

${grn:escapeCsv()};

Your expression language statement grabs the value from FlowFile attribute "grn", passes it to the escapeCsv NEL function, and then appends a ";" to the returned result. The problem is that your FlowFile has no attribute "grn"; it has an attribute "/grn". Since "/grn" contains the special character "/", it will need to be quoted in the NEL statement as follows:

${"/grn":escapeCsv()};

reference: Structure of a NiFi Expression

The above would output content of:

3214600023849;

This content would not require being surrounded by quotes under RFC 4180.

reference: escapeCsv function

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
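The RFC 4180 quoting rule referenced above can be approximated in a short Python sketch; this is an illustration of the quoting rule, not NiFi's exact escapeCsv implementation:

```python
def escape_csv(value: str) -> str:
    """Approximate RFC 4180 escaping: quote only when the value
    contains a comma, double quote, or line break, and double any
    embedded double quotes."""
    if any(ch in value for ch in (',', '"', '\n', '\r')):
        return '"' + value.replace('"', '""') + '"'
    return value

# A plain numeric value like the one above needs no surrounding quotes:
print(escape_csv("3214600023849"))   # 3214600023849
# A value containing a comma and quotes does get quoted:
print(escape_csv('a,"b"'))           # "a,""b"""
```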
10-23-2024
06:02 AM
@vg27 If you have a support contract with Cloudera, you could open a support case where someone could connect directly with you and assist you through your many issues.

------

1. As I have shared before, the Single-User providers are not designed with the intent of use in a NiFi clustered environment. They should only be used for standalone NiFi evaluation purposes. Once you start to get into the more involved cluster-based deployments, you need to use different providers for authentication and authorization. When using the single-user-provider for authentication, each node can create different credentials, which will not work in a cluster environment. For login-based authentication, you should be using LDAP/AD (ldap-provider) or Kerberos (kerberos-provider). For authorization, you should be using the managed authorizer.

------

2. Are you still using your own generated keystore and truststore with your own created private and public certificates? Using the NiFi auto-generated keystore and truststore will also not support clustering well, as each node will not have a common certificate authority.

------

3. The "org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss" exception is an issue with the Zookeeper (ZK) quorum. This error can happen if both your nodes are not fully up at the time of the exception, and may also happen because you do not have proper quorum in your ZK. Quorum consists of an odd number of ZK hosts, with a minimum of 3. I strongly encourage the use of an external ZK, since with only two embedded nodes, anytime one of your nodes goes down you'll lose access to both nodes.

------

4. You are using an external https Load Balancer (LB), which means that sticky sessions (session affinity) must be set up, since the user token issued when you log in is only valid for use with the node that issued it. So if your LB directs you to node 1, which presents you with the login UI, you enter credentials obtaining a user token from node 1, and your LB then redirects you to node 2 to load the UI, it will fail authentication on node 2 because the request includes a token only good for node 1.

------

5. I see you are using a mix of hostnames and IP addresses in your NiFi configurations, so make sure that the node certificates include both as SAN entries to avoid issues.

------

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
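For the external ZK quorum suggested in point 3, the connect string in nifi.properties lists all the quorum members; a minimal sketch with illustrative hostnames:

```properties
# Three-host external ZooKeeper quorum (hostnames are illustrative)
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```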