Member since: 07-30-2019
Posts: 3387
Kudos Received: 1617
Solutions: 999

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 206 | 11-05-2025 11:01 AM |
| | 414 | 10-20-2025 06:29 AM |
| | 554 | 10-10-2025 08:03 AM |
| | 376 | 10-08-2025 10:52 AM |
| | 419 | 10-08-2025 10:36 AM |
04-20-2017
03:43 PM
1 Kudo
@John T If you are using the ListSFTP processor before your FetchSFTP processor, it will produce a zero-byte FlowFile for every file it finds on the target SFTP server. The ListSFTP processor has a "File Filter Regex" property where you can specify a Java regular expression to limit what is returned to just files containing "file123.txt", for example ".*file123\.txt". The ListSFTP processor also maintains state so that the same files are not listed on every run; only new files matching the filter are listed each time it runs. The FetchSFTP processor is designed to retrieve the content of a specific file and insert it as the content of the FlowFile that the FetchSFTP processor is running against. Thanks, Matt
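A minimal check of that filter pattern, assuming ListSFTP matches the regex against the entire filename (it uses Java regex syntax, which this particular pattern shares with Python):

```python
import re

# The pattern suggested above; filenames below are hypothetical examples.
pattern = r".*file123\.txt"

for name in ["data_file123.txt", "file123.txt", "file123.txt.gz", "other.txt"]:
    # NiFi applies full-match semantics, so fullmatch is the analogue here.
    matched = re.fullmatch(pattern, name) is not None
    print(f"{name}: {'listed' if matched else 'skipped'}")
```

Note that the ".txt.gz" case is skipped because the pattern must match the whole filename, not just a prefix.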
04-20-2017
03:18 PM
It is possible to schedule it to run every second or minute. In reality this means it runs as often as possible, using the allowable number of concurrent tasks, during the 10th hour of each day. In your case it sounds like it was able to run at least 10 times in that one hour.
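For reference, a sketch of what that schedule looks like under the CRON driven scheduling strategy, which uses a Quartz-style expression (seconds, minutes, hours, day of month, month, day of week):

```
* * 10 * * ?    # fire every second during the 10th hour of each day
0 * 10 * * ?    # fire once per minute during the 10th hour of each day
```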
04-20-2017
03:04 PM
@Raphaël MARY If you are running a NiFi cluster, by default every node in your cluster will run this GetHDFS processor at 10 AM each day. This means every node will get a copy of the same files and process them in the same way. If you are running a cluster, consider changing the configuration of your GetHDFS processor so it runs on the primary node only.
04-20-2017
02:11 PM
@Rohit Ravishankar Back pressure thresholds are soft limits only, meaning a queue can temporarily exceed the configured value. Back pressure is applied once a threshold has been met or exceeded, and it remains in effect until the queue falls back below the configured threshold. For example, with an object threshold of 10,000, the upstream processor stops being scheduled once the queue reaches 10,000 FlowFiles and resumes once the count drops below that value. Back pressure only affects the processor feeding the connection; processors downstream of where back pressure is being applied continue to run as scheduled.
04-11-2017
05:21 PM
1 Kudo
@Dmitro Vasilenko The ConsumeKafka_0_10 processor allows dynamic properties: you can try adding a new dynamic property to your ConsumeKafka_0_10 processor named "max.message.bytes" with a value of 2147483647 to see if that works for you. A list of Kafka properties can be found here: http://kafka.apache.org/documentation.html#configuration Thanks, Matt
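For clarity, this is all the dynamic property amounts to: the name/value pair below is entered on the processor's properties tab and passed through to the underlying Kafka consumer configuration (the value shown is the one suggested above):

```
max.message.bytes=2147483647
```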
04-10-2017
03:29 PM
@Blake Colson If you found my initial response helpful in answering your question, please accept the answer.
04-10-2017
03:18 PM
2 Kudos
@Emily Sharpe The templates directory was left in place to assist users who are moving from the NiFi 0.x to the NiFi 1.x baseline. Since NiFi 0.x placed all generated templates in this directory, you can copy those templates over to the configured directory in NiFi 1.x, and NiFi will load them into the flow.xml.gz file for you on startup. There really is no other use for this property than the above migration. Thanks,
Matt
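For reference, the property in question lives in nifi.properties; the path shown below is a typical default and may differ in your install:

```
nifi.templates.directory=./conf/templates
```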
04-10-2017
03:07 PM
@Blake Colson I am not completely clear on your response. Each NiFi instance has its own UI. In a NiFi cluster, no matter which node's UI you load, the canvas you see is that of the cluster. Every node in a NiFi cluster must run an identical dataflow. When you stand up your production NiFi instance or cluster, it will have its own UI and its own canvas. You cannot manage your dev, QA, and prod clusters from the same UI.

Once a dataflow exists on a canvas, you can start any portion or all of the components (processors, controller services, input ports, output ports, etc.). There is no dependency that the UI/canvas remain open in your browser in order for those dataflows to continue running. There is also no requirement that you template your entire dataflow each time. You can template only portions of a dataflow and move that portion between dev, QA, and prod.

The SDLC model is not completely there yet in NiFi. Let's say you have dataflow template version 1 and now you want to deploy version 2. Going the template route requires you to bleed out all the data traversing the version 1 flow that is currently running. I would upload version 2 of the template and add it to my canvas, then stop any ingest processors in the version 1 flow already on the canvas. Allow the remaining processors to continue to run so that all data is eventually processed out of the version 1 flow. Start your version 2 dataflow so it starts ingesting all data from that point. Once the version 1 flow no longer has any data queued in it, you can select its components and delete them from the canvas. Thanks, Matt
04-10-2017
02:26 PM
1 Kudo
@Michael Silas There is nothing in the users.xml or authorizations.xml file that is specific to any node. In fact, these files are checked on startup to make sure they are identical across all your NiFi cluster nodes. In order for a node to successfully join a cluster, the flow.xml.gz, users.xml, and authorizations.xml files must match. If you configure a new node to join an existing cluster, and the new node has none of the above three files and you have not configured its authorizers.xml file, the new node will inherit/download these three files from the cluster automatically. Even if you have not added the new node to the "proxy user requests" global access policy yet, you should still be able to connect to the UI of your other nodes and add it afterwards. Again, you are only adding more work for yourself by deleting the users.xml and authorizations.xml files. Thanks, Matt
04-10-2017
02:13 PM
2 Kudos
@Blake Colson The answer to that question lies in what version of NiFi you are running... I will assume you are running the latest version in this response (NiFi 1.2.0 or HDF 2.1.2 as of the time this was written). You have two options for moving your entire dataflow from one system to another.

1. Copy the flow.xml.gz file from one NiFi instance to the other.
- This method requires that both NiFi instances use the same configured sensitive props key in their nifi.properties files. The sensitive props key is used to encrypt the sensitive properties (passwords) in the various components on your canvas. If they don't match, your new NiFi will not be able to load using the flow.xml.gz file you copied over.
- The benefit of this method is you get your entire flow, including all configured passwords.

2. Create a template of your entire dataflow, download it, and then import it into your new NiFi (a scripted variant is sketched below).
- Provide a name and description for your template.
- Once the template is created, you will need to download it: click the download icon to the right of your newly created template.
- You now have a copy of your template to import into your new NiFi.
- Once uploaded, you can add the template to your canvas by dragging the "Template" icon onto your canvas and selecting your newly uploaded template from the selection list.
- Templates are sanitized of any sensitive properties when they are created so they can be used in other NiFi instances. You will need to go to any processor that used a sensitive property and re-enter those sensitive values when using this method.

Thank you, Matt
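If you want to script option 2 rather than use the UI, a rough sketch against the NiFi REST API follows. The URL, process group id, and template filename are placeholders for your own environment, and this assumes an unsecured instance (no authentication shown):

```python
import requests  # third-party HTTP client

# Placeholders -- substitute values from your own environment.
nifi_url = "http://localhost:8080/nifi-api"
pg_id = "root-process-group-id"  # id of the process group to upload into

# Upload a previously downloaded template XML into the new NiFi instance.
with open("my_dataflow_template.xml", "rb") as f:
    resp = requests.post(
        f"{nifi_url}/process-groups/{pg_id}/templates/upload",
        files={"template": f},
    )
resp.raise_for_status()
print("Template uploaded:", resp.status_code)
```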