Member since: 07-30-2019
Posts: 3421
Kudos Received: 1624
Solutions: 1010

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 35 | 01-13-2026 11:14 AM |
| | 171 | 01-09-2026 06:58 AM |
| | 498 | 12-17-2025 05:55 AM |
| | 559 | 12-15-2025 01:29 PM |
| | 558 | 12-15-2025 06:50 AM |
03-25-2024
06:47 AM
@saquibsk A couple of thoughts come to mind here. Have you looked at using a GenerateTableFetch processor in "Dim 2", which can be triggered by an incoming FlowFile? That processor accepts an optional inbound connection as a trigger. Another option would be to use an InvokeHTTP processor after your PutDatabaseRecord processor to start the "Dim 2" QueryDatabaseTable processor via the NiFi REST API, and then do the same after the "Dim 2" QueryDatabaseTable processor to stop it again (see the sketch below). If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
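For reference, here is a minimal sketch of the kind of REST calls InvokeHTTP would need to make to start and stop that processor, written in Python for readability. It assumes a NiFi 1.x instance, a bearer token, and a placeholder UUID for the "Dim 2" QueryDatabaseTable processor; adapt the URL, credentials, and TLS verification to your environment.

```python
import requests

# Placeholders (assumptions): your NiFi base URL, access token, and the UUID
# of the "Dim 2" QueryDatabaseTable processor.
NIFI = "https://nifi.example.com:8443/nifi-api"
HEADERS = {"Authorization": "Bearer <access-token>"}
PROCESSOR_ID = "<dim2-querydatabasetable-uuid>"


def set_run_status(state: str) -> None:
    """Start ("RUNNING") or stop ("STOPPED") a processor via the NiFi REST API."""
    # Fetch the current component revision; the run-status call must echo it back.
    entity = requests.get(f"{NIFI}/processors/{PROCESSOR_ID}",
                          headers=HEADERS, verify=False).json()
    body = {"revision": entity["revision"], "state": state}
    requests.put(f"{NIFI}/processors/{PROCESSOR_ID}/run-status",
                 headers=HEADERS, json=body, verify=False).raise_for_status()


set_run_status("RUNNING")   # e.g. after PutDatabaseRecord completes
# ... once "Dim 2" has finished its run ...
set_run_status("STOPPED")
```

In a dataflow you would issue these same GET and PUT requests from two InvokeHTTP processors rather than from a script.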
03-22-2024
06:21 AM
1 Kudo
@Chetankc From a NiFi perspective there is not much guidance that can be given with so little information.

- What does "10 Billion Load" mean? Is it the number of unique files being ingested into NiFi? What is the average size? What is the rate of ingest?
- What is "15,000 process"? Is this the number of NiFi processors added to the NiFi canvas? What types of processors are being used? Does your dataflow(s) do a lot of content modification?
- Have you done throughput testing and any performance tuning? 15,000 processors is a lot of execution scheduling against your CPU cores. In your load testing, what was your CPU load average? What was your memory impact?
- You also have custom NiFi components. Are you referring to these custom components as using many threads, or to the totality of the 15,000 components using a lot of threads? What does "a lot of threads" mean here? Are any of these long-running threads, or are they all millisecond thread executions?
- What kind of performance and throughput are you achieving now, and on what type of setup (how many nodes in your NiFi cluster, number of CPU cores, JVM heap settings, type of disk, etc.)?

Thank you, Matt
03-22-2024
06:08 AM
1 Kudo
@hidden Welcome to the world of Apache NiFi. The first recommendation I'd make is to download the latest version of the Apache NiFi 1.x branch. The 1.12 branch is more than 5 years old now, and there have been many improvements, bug fixes, and security updates since its release. The new Apache NiFi 2.x branch has also been released recently. Since you are new to NiFi, you may also consider utilizing the 2.x version instead to avoid the hassle of migrating to this new major release branch down the road; the 1.x branch will cease to release new versions soon. When sharing exceptions for help, it is best to make sure you have also inspected the NiFi-Registry logs produced in the log directory configured in the logback.xml file. They may provide more detailed stack traces and/or logging to help fully understand the issue you encountered. Thank you, Matt
03-21-2024
12:57 PM
1 Kudo
@2ir NiFi can consume both heap and non-heap (native) memory. Native memory use commonly happens with processors that create Jetty servers to listen for inbound requests, scripting processors that execute child scripts, processors that execute OS commands, the NiFi bootstrap parent process, etc. So the Xmx heap setting covers only part of the memory a Java application can use. From within the NiFi UI --> global menu --> cluster or summary, you can see the actual amount of heap utilized (the JVM tab in the cluster UI, or system diagnostics in the summary UI).

I would advise against setting your heap so high when you have 47 GB of total memory. If your OS is Linux-based, it is likely to invoke the OOM killer to kill the NiFi process to protect the OS. I'd advise reducing your Xms and Xmx settings to 24 GB. The next step is re-evaluating your dataflows for high-memory-use processors and making sure they are optimally configured. The embedded documentation for each processor component has a "System Resource Considerations" section that will tell you if the processor has the potential to use high memory or high CPU. For processors with potential for high heap usage, be careful with the concurrent tasks configuration. The default for concurrent tasks is always 1; increasing it is like adding a second copy of the processor, allowing multiple concurrent executions and thus increasing heap usage. (Example: ReplaceText 1.25.0)

- Be careful using templates (deprecated), as any templates generated and held in NiFi consume heap.
- FlowFile metadata is held in heap, so avoid creating FlowFiles with large attributes (like extracting content to attributes).
- Use record-based processors whenever possible to reduce the number of individual FlowFiles.
- Use a NiFi cluster instead of a standalone NiFi to spread the FlowFile load across multiple NiFi instances.
- Monitor heap usage and collect heap dumps to analyze what is consuming the heap (a sketch for polling heap usage via the REST API is below).

Hope this helps you with your investigative journey. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
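As a starting point for the monitoring suggestion above, here is a small Python sketch that polls the NiFi system-diagnostics REST endpoint and prints heap usage. The base URL, token, and polling interval are placeholders, and the exact field names in the response are assumptions that should be verified against your NiFi version.

```python
import time

import requests

# Placeholders: adjust the base URL and token for your environment.
NIFI = "https://nifi.example.com:8443/nifi-api"
HEADERS = {"Authorization": "Bearer <access-token>"}

while True:
    diag = requests.get(f"{NIFI}/system-diagnostics",
                        headers=HEADERS, verify=False).json()
    # Field names below are assumptions based on the 1.x response shape;
    # confirm them against your own /system-diagnostics output.
    snap = diag["systemDiagnostics"]["aggregateSnapshot"]
    print(f'{time.strftime("%H:%M:%S")} '
          f'heap {snap.get("usedHeap")} / {snap.get("maxHeap")} '
          f'({snap.get("heapUtilization")})')
    time.sleep(60)
```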
03-20-2024
12:52 PM
1 Kudo
@darkcoffeelake The NiFi out-of-the-box setup generates a simple keystore and truststore automatically and sets the login identity provider to single-user-provider and the authorizer to single-user-authorizer. This out-of-the-box setup simplifies secured access for evaluating NiFi. It is not a production-ready setup, in that it does not support multi-user authentication, granular access controls, or NiFi cluster setups.

There are a number of steps that go into securing Apache NiFi for production-ready environments. Securing NiFi not only sets up NiFi over an HTTPS connection, but also requires that user authentication and authorization be set up. NiFi will require a keystore and truststore, which you can create yourself or use a publicly available service to create for you (tinycert, for example). The keystore created for your NiFi must meet the following requirements:

- Contains only 1 PrivateKey entry.
- Does not use wildcards in the DN of the PrivateKey certificate.
- Has both clientAuth and serverAuth Extended Key Usage (EKU).
- Has SubjectAlternativeName (SAN) entries matching the NiFi hostname and any other name that may be used to access the NiFi.

The truststore needs to contain the complete trust chain for your NiFi keystore certificate. A certificate might be self-signed (meaning both issuer and signer are the same DN), or it may be signed by an intermediate CA or root CA. If signed by an intermediate CA, your truststore would need the trustedCertEntry (public key) for the intermediate CA (an intermediate CA is any CA where signer and issuer are different DNs), then the trustedCertEntry for that CA's signer, and so on until you reach the root CA in the chain (the root CA will have the same signer and issuer DN). A sketch for checking a certificate against these requirements is included below.

Once you have your certificates, you'll need to decide how your users are going to authenticate with NiFi. NiFi does not have an embedded provider that supports multi-user authentication. Here is what is available to choose from: User Authentication. LDAP and Kerberos are probably the most commonly used.

Once you have decided how you are going to authenticate your users, you'll need to set up authorization for those users. Your options are covered here: Multi-Tenant Authorization. The simplest authorizers.xml setup would utilize the StandardManagedAuthorizer, FileAccessPolicyProvider, and FileUserGroupProvider. A sample configuration can be seen here: https://nifi.apache.org/documentation/nifi-2.0.0-M1/html/administration-guide.html#file-based-ldap-authentication

If set up correctly, on first startup the above authorizers.xml will generate and seed the users.xml and authorizations.xml files so that your initial admin user (an LDAP or Kerberos user, for example) has the necessary authorization policies to access the NiFi UI. From the NiFi UI, that initial admin user can set up additional user identity authorizations.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
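If you want to verify your keystore certificate against the requirements listed above, here is a hedged Python sketch using the cryptography library. It assumes you have first exported the certificate from the keystore to PEM (for example with keytool -exportcert -rfc); the file name is a placeholder.

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

# Placeholder path: a PEM export of the PrivateKey certificate in your NiFi keystore.
with open("nifi-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# These lookups raise ExtensionNotFound if the extension is missing,
# which itself tells you the certificate does not meet the requirements.
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value

print("Subject DN:    ", cert.subject.rfc4514_string())
print("SAN DNS names: ", san.get_values_for_type(x509.DNSName))
print("clientAuth EKU:", ExtendedKeyUsageOID.CLIENT_AUTH in eku)
print("serverAuth EKU:", ExtendedKeyUsageOID.SERVER_AUTH in eku)
```

The SAN list should include every hostname used to reach NiFi, and both EKU checks should print True.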
03-20-2024
12:14 PM
@Vas The straight-out-of-the-box generated keystore and truststore will not have "nifi.local" as a SAN entry. You could generate your own keystore and truststore with the needed SAN entry(s). If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
03-20-2024
11:55 AM
@manishg The FetchFile processor fetches the content of the target file and inserts that content into the FlowFile that triggered the fetch. That FlowFile is then passed on to the next processor. So whatever is set as the "filename" attribute on the FlowFile remains the filename even after fetching the content. Without specifics on the rest of your dataflow's configuration, it is hard to provide any additional input on your issue. My guess here is that the PutSFTP processor is writing the content of the fetched file to the target SFTP server location; however, its filename is not that of the fetched file. If this is the case, you have a dataflow design issue and need to check your configurations to make sure the filename attribute is being set accordingly. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
03-15-2024
09:54 AM
1 Kudo
@Chaitanya_Y I am not sure why the Apache NiFi community did not release any migration guidance with the Apache NiFi 1.25 release. However, there are release notes that highlight notable changes: https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.25.0 You will want to read through all the migration guidance for every release between 1.16 and 1.25 to see if anything applies to your specific setup or dataflows. Take note of any deprecated components you may be using currently and any components that were removed from the default release (removed does not mean gone; you can download those removed NARs from the central repository and add them to the 1.25.0 release if needed). If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
03-15-2024
06:57 AM
2 Kudos
@Chaitanya_Y Apache NiFi 1.16.0 is 2 years old at this point, and there have been many bug fixes specific to parameter contexts since that time. I recommend upgrading to the latest Apache NiFi 1.25.0 release. Some of the issues fixed are related to the problems you have shared. Parameter context fixes since the Apache NiFi 1.16.0 release: https://issues.apache.org/jira/browse/NIFI-10096?jql=project%20in%20(NIFI%2C%20NIFIREG)%20AND%20fixVersion%20in%20(1.16.1%2C%201.16.2%2C%201.16.3%2C%201.17.0%2C%201.18.0%2C%201.19.0%2C%201.19.1%2C%201.20.0%2C%201.21.0%2C%201.22.0%2C%201.23.0%2C%201.23.1%2C%201.23.2%2C%201.24.0%2C%201.25.0%2C%201.26.0)%20AND%20text%20~%20%22%5C%22parameter%20context%5C%22%22%20ORDER%20BY%20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
03-12-2024
06:22 AM
2 Kudos
@broobalaji HDF 1.8.0.3.3.1.0-10 was released way back in 2017. I strongly recommend upgrading to a much newer release of CFM. NiFi templates have been deprecated and are completely removed as of the Apache NiFi 2.x releases. Apache NiFi deprecated templates for a number of reasons:

1. Templates uploaded to NiFi (even if not instantiated/imported to the NiFi canvas) reside within NiFi's heap memory space.
2. Large uploaded templates, or many uploaded templates, can have a substantial impact on NiFi performance because of the amount of heap they can consume. Simply increasing the size of NiFi's heap is not the best solution either, as large heaps lend themselves to longer stop-the-world garbage collection pauses in the JVM.
3. Apache NiFi deprecated and moved away from XML-based flows in favor of JSON flow definitions around the Apache NiFi 1.16 time frame. Flow definitions (JSON files) can be exported and imported without being held in heap memory within NiFi.

The above info aside, it is best to use the developer tools available in your web browser to inspect/capture the REST API calls being made when you perform the same steps directly via the NiFi UI. This makes it easy to understand the calls that need to be made in your automation. If you do continue to use templates, I also encourage you to upload, import to the UI, and then delete the uploaded template to minimize heap impact (a sketch of that sequence via the REST API is below). Thanks, Matt
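For anyone automating the upload/import/delete pattern described above, here is a rough Python sketch of the REST calls involved. The endpoint paths reflect the NiFi 1.x REST API, but the base URL, token, process group UUID, and template UUID are placeholders; the upload response is XML, so the template id is left as a placeholder you would parse out or look up via GET /flow/templates.

```python
import requests

# Placeholders: adjust for your environment.
NIFI = "https://nifi.example.com:8443/nifi-api"
HEADERS = {"Authorization": "Bearer <access-token>"}
PG_ID = "<target-process-group-uuid>"

# 1. Upload the template XML (from this point it is held in NiFi's heap).
with open("my_flow_template.xml", "rb") as f:
    requests.post(f"{NIFI}/process-groups/{PG_ID}/templates/upload",
                  headers=HEADERS, files={"template": f},
                  verify=False).raise_for_status()

# The upload response is XML; parse the template id from it, or find it
# afterwards via GET {NIFI}/flow/templates. A placeholder is used here.
TEMPLATE_ID = "<uploaded-template-uuid>"

# 2. Instantiate the template onto the canvas.
requests.post(f"{NIFI}/process-groups/{PG_ID}/template-instance",
              headers=HEADERS, verify=False,
              json={"templateId": TEMPLATE_ID,
                    "originX": 0.0, "originY": 0.0}).raise_for_status()

# 3. Delete the uploaded template so it no longer consumes heap.
requests.delete(f"{NIFI}/templates/{TEMPLATE_ID}",
                headers=HEADERS, verify=False).raise_for_status()
```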