Member since: 07-30-2019
Posts: 3399
Kudos Received: 1621
Solutions: 1001

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 492 | 11-05-2025 11:01 AM |
| | 377 | 11-05-2025 08:01 AM |
| | 614 | 11-04-2025 10:16 AM |
| | 750 | 10-20-2025 06:29 AM |
| | 890 | 10-10-2025 08:03 AM |
10-23-2024
06:40 AM
1 Kudo
@HenriqueAX The NiFi keystore contains a PrivateKeyEntry (the node's private key and its certificate). The NiFi truststore contains TrustedCertEntries (public certificates). You should combine all the truststores into one truststore containing all the public certificates and use that same truststore on all the NiFi nodes and the NiFi-Registry host.

It may also help to understand what is happening by looking at the output from openssl:

openssl s_client -connect <nifi hostname>:<nifi port> -showcerts
openssl s_client -connect <nifi-registry hostname>:<nifi-registry port> -showcerts

A hedged keytool sketch for merging the truststores is included below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
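As a rough illustration, here is one way the truststores could be merged with keytool. This is a hedged sketch, not an exact recipe: the file names, alias, and password placeholders are assumptions you would replace with your own.

```bash
# Export a public cert entry (alias assumed to be "nifi-registry-ca") from the
# NiFi-Registry truststore. Paths, alias, and passwords are placeholders.
keytool -exportcert -rfc \
  -keystore registry-truststore.jks -storepass <registry-truststore-pass> \
  -alias nifi-registry-ca -file registry-ca.pem

# Import that cert into the common truststore that every NiFi node and the
# NiFi-Registry host will share. Repeat for each public cert you need to trust.
keytool -importcert -noprompt \
  -keystore common-truststore.jks -storepass <common-truststore-pass> \
  -alias nifi-registry-ca -file registry-ca.pem
```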
10-23-2024
06:33 AM
@AndreyDE After your EvaluateXPath processor, you have a FlowFile that now has a FlowFile attribute "/grn" with a value of "3214600023849". In ReplaceText, it appears your intent is to replace the entire content of the FlowFile with the value returned by the NiFi Expression Language (NEL) statement:

${grn:escapeCsv()};

That expression grabs the value from the FlowFile attribute "grn", passes it to the escapeCsv NEL function, and then appends a ";" to the returned result. The problem is that your FlowFile has no attribute "grn"; it has an attribute "/grn". Since "/grn" contains the special character "/", it needs to be quoted in the NEL statement as follows:

${"/grn":escapeCsv()};

reference: Structure of a NiFi Expression

The above would output content of:

3214600023849;

This content would not need to be surrounded by quotes under RFC 4180.

reference: escapeCsv function

A hedged sketch of the ReplaceText configuration follows below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
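For reference, a hedged sketch of how the ReplaceText processor might be configured for the above; the property values are assumptions based on the stated goal of replacing the entire FlowFile content.

```
ReplaceText (sketch)
  Replacement Strategy : Always Replace
  Evaluation Mode      : Entire text
  Replacement Value    : ${"/grn":escapeCsv()};
```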
10-23-2024
06:02 AM
@vg27 If you have a support contract with Cloudera, you could open a support case where someone could connect directly with you and assist you through your many issues.

1. As I have shared before, the single-user providers are not designed for use in a NiFi clustered environment. They should only be used for standalone NiFi evaluation purposes. Once you get into more involved cluster-based deployments, you need to use different providers for authentication and authorization. When using the single-user-provider for authentication, each node can create different credentials, which will not work in a cluster environment. For login-based authentication, you should be using LDAP/AD (ldap-provider) or Kerberos (kerberos-provider). For authorization, you should be using the managed authorizer. A hedged sketch of the relevant properties follows after this list.

2. Are you still using your own generated keystore and truststore with your own created private and public certificates? Using the NiFi auto-generated keystore and truststore will also not support clustering well, since each node will not have a common certificate authority.

3. The "org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss" exception is an issue with the Zookeeper (ZK) quorum. This error can happen if both your nodes are not fully up at the time of the exception, and it may also happen because you do not have a proper ZK quorum. A quorum consists of an odd number of ZK hosts, with a minimum of 3. I strongly encourage the use of an external ZK, since any time one of your nodes goes down, you will lose access to both nodes.

4. You are using an external HTTPS Load Balancer (LB), which means sticky sessions (session affinity) must be set up, since the user token issued when you log in is only valid for use with the node that issued it. So if your LB directs you to node 1, which presents the login UI, you enter credentials and obtain a user token from node 1; if your LB then redirects you to node 2 to load the UI, authentication will fail on node 2 because the request includes a token only good for node 1.

5. I see you are using a mix of hostnames and IP addresses in your NiFi configurations, so make sure that the node certificates include both as SAN entries to avoid issues.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
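As referenced in point 1 above, a hedged sketch of the relevant nifi.properties entries for a cluster using LDAP for login and the managed authorizer. The hostnames and ports are placeholders, and the ldap-provider itself is defined separately in login-identity-providers.xml.

```
# nifi.properties (sketch -- values are placeholders)
nifi.security.user.login.identity.provider=ldap-provider
nifi.security.user.authorizer=managed-authorizer

# External Zookeeper quorum: odd number of hosts, minimum of 3
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```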
10-22-2024
01:36 PM
1 Kudo
@edim2525 NiFi needs access to a lot of file handles, since your dataflow can consist of many components with multiple concurrent tasks, plus you can have many individual FlowFiles traversing your dataflows. The typical default open file limit is 10,000. I'd recommend setting a much larger open file limit, somewhere between 100,000 and 999,999. This will solve your "Too many open files" error. A hedged example of raising the limit is included below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
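For example, on a typical Linux install the limit could be raised as follows. This is a hedged sketch: the service user "nifi" and the install layout are assumptions for your environment.

```
# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
nifi  soft  nofile  100000
nifi  hard  nofile  100000

# If NiFi runs under systemd, set the limit in the unit (or a drop-in) instead:
# [Service]
# LimitNOFILE=100000
```

After restarting the service, the effective limit can be confirmed from the running NiFi process (for example with `cat /proc/<nifi-pid>/limits`).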
10-21-2024
12:42 PM
@nifier Your PutFile issue is unrelated to the original query in this community question. It is better to start a new community question for unrelated queries, since solutions can otherwise become confusing to others who may use this thread in the future.

That being said, this exception is caused because your NiFi FlowFile has a filename that contains a directory structure:

20242323/year/year.txt

This is not a valid filename to use with the PutFile processor. I am not sure where in your dataflow before PutFile the filename FlowFile attribute is being modified in this way; you might be able to address the issue there (preferred). Alternatively, you could use an UpdateAttribute processor to extract the directory structure from the filename before the PutFile processor. If you want to keep that directory structure, append the extracted path to the "Directory" configured in the PutFile processor so that the directories get created. A hedged sketch follows below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
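As mentioned above, a hedged UpdateAttribute sketch for splitting the embedded path out of the filename before PutFile. The attribute name "extracted.path" and the base output directory are hypothetical choices for illustration.

```
UpdateAttribute (sketch -- dynamic properties)
  extracted.path : ${filename:substringBeforeLast('/')}
  filename       : ${filename:substringAfterLast('/')}

PutFile (sketch -- base directory is a placeholder)
  Directory                  : /base/output/dir/${extracted.path}
  Create Missing Directories : true
```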
10-21-2024
06:18 AM
@vg27

1. So I understand that you have created client certificates for your user. What authority was used to sign these user certificates? Was this authority added to the NiFi configured truststore? When you open a browser to NiFi's URL, NiFi responds with a WANT for a clientAuth certificate along with a list of trusted authorities from its truststore. If the certificate loaded in your browser is not signed by one of those authorities, it will not be presented to NiFi. If no clientAuth certificate is presented, NiFi moves on to another configured method of user authentication. The fact that you are seeing the NiFi login UI tells me the TLS exchange did not result in a clientAuth certificate being presented by your browser. With certificate-based mutual auth there is no login required. A hedged way to check this is sketched below.

3. "nifi.security.user.login.identity.provider=singleUser" is not a valid configuration. I assume you meant "nifi.security.user.login.identity.provider=single-user-provider". With the single-user-provider configured, the only username and password accepted are the single-user credentials NiFi auto-generated and output to the logs the first time NiFi was started with that provider configured. If you have no intention of using the single-user-provider, just leave "nifi.security.user.login.identity.provider=" unset.

4. You don't need to worry about sticky sessions if you are only using certificate-based authentication, since your client certificate is passed in every request and there are no tokens involved as with login-based providers. If you decide to use a login provider like LDAP or Kerberos later, sticky sessions would need to be set up first or you may never be able to access the UI. Once you enter the username and password, the next request is to access the UI using the issued token, and if the load balancer redirects that request to a different node, the UI will not load but instead throw an exception about the unknown user.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
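To confirm point 1 above, a hedged way to check what NiFi advertises during the handshake and what its truststore actually trusts. Hostnames, ports, file names, and passwords are placeholders.

```bash
# Show the authorities NiFi sends when requesting a client certificate.
# In the output, look for the "Acceptable client certificate CA names" section and
# confirm the CA that signed your user certificate appears there.
openssl s_client -connect <nifi-hostname>:<nifi-port> -showcerts </dev/null

# List what is actually trusted in the NiFi truststore
keytool -list -v -keystore truststore.jks -storepass <truststore-pass>
```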
10-21-2024
05:52 AM
@jame1997 You can version control Process Groups (PGs) into NiFi-Registry. If a process group were to get deleted, you could reload the last version stored in NiFi-Registry. With version-controlled PGs, any time a user makes a change to the PG, the PG reports that local changes exist and offers a quick option to commit a new version. Not only does this version control feature allow you to restore the last good version stored in NiFi-Registry, it also makes it easy to back out changes to an older stored version. To build a NiFi-Registry catalog of all your PGs and facilitate easy rollback, restore, and reuse, you would version control the individual PGs rather than only the top-level PG.

If you are talking about a scenario where you accidentally deleted a PG and noticed right away, you can simply swap in the newest archived flow.json.gz and restart your NiFi to restore it. A hedged restore sketch follows below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
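A hedged restore sketch, assuming the default install layout and archive directory; adjust paths and the archived file name to your environment.

```bash
# 1. Stop NiFi
./bin/nifi.sh stop

# 2. Swap in the most recent archived flow (archive dir defaults to ./conf/archive/)
cp ./conf/archive/<timestamp>_flow.json.gz ./conf/flow.json.gz

# 3. Restart NiFi
./bin/nifi.sh start
```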
10-21-2024
05:41 AM
@Tanya19 @MaxEcueda The PutIceberg and PutIcebergCDC processors currently only offer Hadoop or Hive Catalog Service provider options. The only mention of a Glue Catalog I could find in an Apache NiFi Jira was the following still-open Jira: https://issues.apache.org/jira/browse/NIFI-11449. It might be a good idea to create an Apache NiFi Jira with as much detail as you can provide around this improvement request for an additional AWS Glue Catalog provider.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
10-18-2024
08:09 AM
@jame1997

1. Everything that exists on the canvas resides in heap and is persisted to the flow.json.gz file written to the NiFi conf directory (by default). With every change made within the NiFi UI, the current flow.json.gz is moved to an archive directory and a new flow.json.gz is generated. All nodes in a NiFi cluster use the same flow.json.gz. You don't need to stop NiFi to make a copy of the flow.json.gz file. The flow.json.gz contains encrypted values for all sensitive properties entered in any component on the NiFi UI canvas. In order for a NiFi instance to load a flow.json.gz, it must be configured with the same "nifi.sensitive.props.key" password used by the NiFi where the flow.json.gz originated, so it is always a good idea to back up your NiFi's conf directory (especially the nifi.properties file). Keep in mind that all nodes in the NiFi cluster must have the same sensitive props key password configured, so unless you lose your entire NiFi cluster, you can get the flow.json.gz and sensitive props key from any node that is still accessible. A hedged backup sketch follows after this list.

2. I am not crystal clear on what "backup all the processes" means. Are you referring to all the dataflows built on the NiFi canvas? Some components (processors, controller services, reporting tasks) may have dependencies on other things like local/cluster state, local files, etc., so those items need to be taken into consideration in any backup planning. Each node in a NiFi cluster has a local state directory which holds state for components that use local state (ListFile, for example). Other components use cluster state, which is written to Zookeeper, where the Zookeeper quorum protects that information. State is always changing while your dataflow is running, so it is not something you can easily back up, especially local state, which has no redundancy across NiFi nodes.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
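As referenced in point 1 above, a hedged backup sketch; the /opt/nifi and /backup paths are placeholders for your install.

```bash
# flow.json.gz can be copied while NiFi is running
cp /opt/nifi/conf/flow.json.gz /backup/nifi/flow.json.gz.$(date +%Y%m%d)

# Back up the whole conf directory too, so nifi.properties (and with it the
# nifi.sensitive.props.key that encrypted the flow) travels with the backup
tar czf /backup/nifi/nifi-conf-$(date +%Y%m%d).tar.gz -C /opt/nifi conf
```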
10-18-2024
07:51 AM
@Kiranq The error you shared:

2024-10-17 08:35:19,764 ERROR [Timer-Driven Process Thread-5] o.a.n.c.s.StandardControllerServiceNode StandardControllerServiceNode[service=CSVRecordLookupService[id=a8b84b00-b0ee-31c8-dbda-7e7e9795ba4b], name=CSVRecordLookupService, active=true] Encountering difficulty enabling. (Validation State is INVALID: ['CSV File' is invalid because CSV File is required, 'Lookup Key Column' is invalid because Lookup Key Column is required]). Will continue trying to enable.

indicates that NiFi is trying to enable a controller service loaded from the flow.json.gz during startup but cannot, because its configuration is invalid. It is complaining about the configuration of the "CSV File" and "Lookup Key Column" properties. Have you tried starting your NiFi with the following setting in your nifi.properties file set to "false"?

nifi.flowcontroller.autoResumeState=false

This will start NiFi without starting any of the components on the canvas. Also, if your NiFi is at the point where it is trying to enable components on the canvas, your NiFi is up and running.

As far as the screenshot error, have you verified ownership and permissions on that directory path? Permissions can become an issue if you started the NiFi service as different users at some point in time, resulting in some files created on startup having different ownership. A hedged check is sketched below.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on the ones that helped. Thank you, Matt
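As mentioned above, a hedged way to check ownership and permissions; the /opt/nifi path and the "nifi" service user are placeholders for your install.

```bash
# Confirm the directories NiFi writes to are owned by the user running the service
ls -ld /opt/nifi/conf /opt/nifi/state /opt/nifi/logs /opt/nifi/run

# If files were created under a different user on a previous startup, reset ownership
chown -R nifi:nifi /opt/nifi
```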