Member since: 07-30-2019
Posts: 3421
Kudos Received: 1624
Solutions: 1010

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 44 | 01-13-2026 11:14 AM |
| | 181 | 01-09-2026 06:58 AM |
| | 504 | 12-17-2025 05:55 AM |
| | 565 | 12-15-2025 01:29 PM |
| | 561 | 12-15-2025 06:50 AM |
04-02-2024
01:22 PM
@edim2525 The entire dataflow(s) reside in NiFi heap memory and are persisted to disk in the flow.json.gz file. This includes the current set state for each component. Every time a change is made to the NiFi canvas, the current persisted flow.json.gz is moved to archive and a new flow.json.gz is written.

When NiFi is started, it loads the flow.json.gz into NiFi heap memory and sets each component to the state recorded in the flow.json.gz that was loaded. The only time this is not true is when the nifi.properties property "nifi.flowcontroller.autoResumeState" has been set to false. When set to false, all "RUNNING" components recorded in the flow.json.gz will load into heap as "STOPPED". NiFi will then archive the current flow.json.gz and write a new flow.json.gz with those new "STOPPED" states. A sketch of this property is shown at the end of this reply.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
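A minimal nifi.properties sketch of the property discussed above; the default value is true, and the comment restates the behavior described in this post:

```
# conf/nifi.properties
# When false, components recorded as RUNNING in flow.json.gz are
# loaded into heap as STOPPED at startup (the default is true)
nifi.flowcontroller.autoResumeState=false
```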
04-02-2024
06:16 AM
@EddyChan The out-of-box Apache NiFi self-signed certificate generation was added to make it easy for first-time users to experiment with a secure NiFi instance. Just like the single-user authentication and single-user authorizer, it was not intended for long-term or production use cases. There is no configuration option to extend the lifetime.

For long-term use or production setups, you should be generating your own signed certificates for use in your NiFi (preferably signed by a trusted authority rather than being self-signed). Some options:
- Use the NiFi TLS toolkit, still available in the Apache NiFi 1.x releases, to generate your own certificates for the keystore and truststore (a sketch follows at the end of this reply).
- Generate your own certificates following the guidelines for NiFi certificates: Security Configuration
- Use a free online service to generate certificates.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
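A hedged sketch of the TLS toolkit in standalone mode; the hostname, DN, and output directory below are placeholders to adapt to your environment:

```
# Run from the unpacked Apache NiFi Toolkit 1.x directory
./bin/tls-toolkit.sh standalone \
  -n 'nifi01.example.com' \
  -C 'CN=admin,OU=NiFi' \
  -o ./target
# Writes a keystore, truststore, and nifi.properties fragment per hostname,
# plus a client certificate/key for the DN passed with -C
```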
04-01-2024
07:19 AM
2 Kudos
@s198 In your use case you could probably handle this without needing a sub process group using FlowFile concurrency, since the GetHDFSFileInfo processor sets a number of attributes on the FlowFiles it produces that you can use to correlate the various FlowFiles to one another.

Since you have already written out all your individual HDFS files to your SFTP processor, you could remove all content from the FlowFiles using ModifyBytes (no sense in wasting CPU to merge content you don't need to keep anymore), so all you have are zero-byte FlowFiles with attribute data. Then feed that stream of zero-byte FlowFiles to a MergeContent processor. I would configure your MergeContent as below, assuming defaults for any property not mentioned here (a concrete sketch with example numbers follows at the end of this reply):
- Correlation Attribute Name = hdfs.path
- Minimum Number of Entries = <set to a value higher than you would expect to be listed in any HDFS directory>
- Maximum Number of Entries = <set to a value larger than the above>
- Max Bin Age = <set to a value high enough to allow all files to reach this processor before the time expires>

What the above will do is start placing all FlowFiles that have the same value in the FlowFile's hdfs.path attribute into the same MergeContent bin. The Minimum Number of Entries prevents the MergeContent processor from merging the bin until that value is reached or until the Max Bin Age expires. The bin age starts as soon as the first FlowFile is allocated to the bin. So basically, since we don't know how many files might be in any given HDFS directory, we are controlling the merge via bin age instead of number of FlowFiles. This builds some delay into your dataflow, but results in one FlowFile output for each HDFS directory listed and fetched. You can then take that one zero-byte merged FlowFile and use it to complete your single-FlowFile downstream processing of job completion.

While the above would work in ideal conditions, you should always design with the possibility of failure in mind. I would still recommend placing every component from "GetHDFSFileInfo --> RouteOnAttribute --> UpdateAttribute --> FetchHDFS --> PutSFTP --> ModifyBytes --> UpdateAttribute" inside a process group configured with "FlowFile Concurrency = Single FlowFile Per Node" and "Outbound Policy = Batch Output". This would allow you to make sure that all fetched FlowFiles are successfully processed (written to the SFTP server) before any are output from the process group to the ModifyBytes and MergeContent processors. You never know when an issue may prevent or slow writing to the SFTP server. This allows you to more easily handle those failures and assure any retries or errors are handled before exiting that PG and completing your job. It also allows you to set a much shorter Max Bin Age in your MergeContent processor, since before any FlowFiles in that batch are released they will all have been processed, so they will all reach MergeContent at the same time.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
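To make the shape of that configuration concrete, a hedged sketch with illustrative numbers; the entry counts and bin age are placeholders to tune to your directory sizes and flow latency:

```
MergeContent (properties not listed keep their defaults)
  Merge Strategy             = Bin-Packing Algorithm
  Correlation Attribute Name = hdfs.path
  Minimum Number of Entries  = 100000   # higher than any expected directory listing
  Maximum Number of Entries  = 200000   # larger than the minimum above
  Max Bin Age                = 10 min   # long enough for every fetched file to arrive
```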
04-01-2024
06:48 AM
1 Kudo
@ALWOSABY This looks related to the driver version you may be using in the processor. Trying a different driver version may resolve your issue; perhaps try ojdbc6 version 11.1.0.7.0?

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
03-29-2024
10:23 AM
1 Kudo
@jame1997 Not much to look at from the NiFi side. NiFi is writing to the network successfully, and some loss is then happening between NiFi and your syslog server. Resource usage affecting your NiFi would only slow down processing, not result in data loss within NiFi. So the PutSyslog would successfully write all bytes to the network before passing the FlowFile to the "success" relationship. Using TCP, of course, would allow NiFi to confirm successful delivery, thus allowing NiFi to appropriately retry or route to either the failure or success relationship.

You could look at the data rate NiFi is writing from the PutSyslog by looking at the stats on the processor. Then maybe you could experiment with:
1. netstat -su to check for UDP packet loss (see the example at the end of this reply).
2. Using a network monitoring tool.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
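For example, on Linux (exact counter names vary slightly by distribution), you could compare the UDP error counters before and after a test run on both the NiFi host and the syslog server:

```
# Growing "packet receive errors" or "receive buffer errors" counters
# between runs point to UDP drops on that host
netstat -su
```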
03-29-2024
07:13 AM
1 Kudo
@jame1997 My first question would be how you have your PutSyslog processor configured. Are you using TCP or UDP? If you are using UDP, there is not going to be any confirmed delivery; it is not a lossless protocol. TCP, by contrast, does have confirmed delivery, at the expense of speed. A sketch of the relevant setting is at the end of this reply.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
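For reference, a minimal sketch of the PutSyslog property in question; the value shown reflects the trade-off described above:

```
PutSyslog
  Protocol = TCP   # confirmed delivery, enabling retry/failure routing;
                   # UDP is fire-and-forget
```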
03-29-2024
07:06 AM
1 Kudo
@DeepakDonde https://issues.apache.org/jira/browse/NIFI-12513 does not mention the GetHTTP processor, so you could certainly try that processor to see if you experience the same issue. Downgrading would lose all improvements and bug fixes introduced in Apache NiFi 1.25. Otherwise, you could wait until 1.26 is released, which contains the fix.

The InvokeHTTP processor is part of the NiFi standard nar, which includes a lot of NiFi components. You could also try downloading just the 1.24.0 standard nar from the Maven Central repository and adding it to the extensions folder of your 1.25.0 NiFi (a sketch follows at the end of this reply). This would make both the 1.24 and 1.25 versions of many components available in your NiFi. You could then use the 1.24 version of InvokeHTTP over the 1.25 version that has the issue, while continuing to use the 1.25 version for all other components. While I have added multiple versions of the same nar to my NiFi installations in the past, I have not done so with the standard nar. If you have issues, you can stop your NiFi, remove the added nar, and restart so things go back to the way they were.

https://mvnrepository.com/artifact/org.apache.nifi/nifi-standard-shared-nar/1.24.0
https://repo1.maven.org/maven2/org/apache/nifi/nifi-standard-shared-nar/1.24.0/nifi-standard-shared-nar-1.24.0.nar

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
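A hedged sketch of that approach; the NiFi install path is a placeholder, and the URL is the Maven Central artifact linked above:

```
# Drop the 1.24.0 nar into the extensions directory of the 1.25.0 install
cd /opt/nifi-1.25.0
wget -P ./extensions \
  https://repo1.maven.org/maven2/org/apache/nifi/nifi-standard-shared-nar/1.24.0/nifi-standard-shared-nar-1.24.0.nar
# To back it out: stop NiFi, remove the nar from ./extensions, and restart
```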
03-29-2024
06:51 AM
@s198 NiFi has no ability to merge files remotely. NiFi would need to consume all the files (ListHDFS --> FetchHDFS), then merge the content of those FlowFiles (MergeContent or MergeRecord), then use UpdateAttribute to set the desired filename on the merged file (a sketch follows at the end of this reply), and finally write the merged file back to HDFS using the PutHDFS processor. If you are using a NiFi cluster, you would need to do all this merging on one node of the cluster, since NiFi nodes can only execute against the FlowFiles present on that one specific node.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
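As an illustration of the UpdateAttribute step, a hypothetical dynamic property using NiFi Expression Language; the naming pattern is only an example:

```
UpdateAttribute (dynamic property)
  filename = merged_${now():format('yyyyMMdd-HHmmss')}.txt   # timestamped name for the merged file
```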
03-28-2024
10:26 AM
3 Kudos
@C1082 That ERROR has nothing to do with the community question you asked about, the error while accessing the NiFi UI:
javax.net.ssl.SSLException: Connection reset
Fixing the ERROR logged by the DBCPConnectionPool controller service shared in your last post will not resolve your UI access issue. Are you still having issues accessing the NiFi UI?

If not, try searching for the DBCPConnectionPool that is throwing this exception and verify its configuration and the driver it is configured to use. You can find this specific NiFi controller service by searching on its unique assigned ID: "8c23244e-6b42-38c5-aaf2-effc40ab1d4b". You'll want to make sure the driver still exists at the configured location and is owned by and accessible to the NiFi service user (a sketch of that check follows at the end of this reply). Sharing the exact SQL DB version and the database driver currently in use would also help here. Was this controller service working before the AKS version upgrade?

Please help our community continue to thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
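A hedged example of that filesystem check; the driver path and the nifi service user are placeholders, so substitute the location configured on the DBCPConnectionPool and the user your NiFi actually runs as:

```
# Confirm the configured driver jar still exists and is readable by the NiFi service user
ls -l /opt/nifi/drivers/mssql-jdbc.jar
sudo -u nifi test -r /opt/nifi/drivers/mssql-jdbc.jar && echo "readable" || echo "NOT readable"
```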
03-28-2024
10:10 AM
1 Kudo
@DeepakDonde The issue you are describing was caused by a change in the Apache NiFi InvokeHTTP processor that tries to URL-encode the URL entered: https://issues.apache.org/jira/browse/NIFI-12513

The fix for this is in https://issues.apache.org/jira/browse/NIFI-12785, which will be part of the Apache NiFi 1.26 and Apache NiFi 2.0.0-M3 releases. Since the change that caused this issue was added in Apache NiFi 1.25 and Apache NiFi 2.0.0-M2, you could use an earlier version like Apache NiFi 1.24 or Apache NiFi 2.0.0-M1 to get around the issue until the two above-mentioned versions are released.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt