Member since: 07-30-2019
Posts: 3392
Kudos Received: 1618
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 417 | 11-05-2025 11:01 AM |
| | 308 | 11-05-2025 08:01 AM |
| | 447 | 11-04-2025 10:16 AM |
| | 666 | 10-20-2025 06:29 AM |
| | 806 | 10-10-2025 08:03 AM |
03-07-2017
08:00 PM
You need to keep the NiFi copy of the data even after writing a copy of it out via PutSFTP? If you need to retain a local copy of the data, route the success relationship twice from your PutSFTP processor. You should be able to do this with..... The UpdateAttribute processor can be used to update the filename by adding the following new property to its configuration: The local copy of your files remains unchanged down the success relationship to the left. The copy sent down the path to the right will have its content cleared and its filename changed, and will then be sent via another PutSFTP. Thanks, Matt
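The exact property from the original post did not carry over here. Purely as an illustration (the attribute value shown is hypothetical), a dynamic property added to UpdateAttribute to rewrite the filename before the second PutSFTP might look like:

```
filename = ${filename:append('.copy')}
```

In UpdateAttribute, each dynamic property name is the attribute to set, and the value is a NiFi Expression Language expression evaluated against the FlowFile, so this appends ".copy" to the existing filename.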
03-07-2017
05:55 PM
Thanks Much.
03-07-2017
06:26 PM
I accepted this answer because your actual solution was in a comment further down; noting it here for future readers. Using the colon in the filename was the problem.
01-16-2018
05:07 PM
@Eric Lloyd With the above configuration, it would only take 1 FlowFile to be assigned to a bin before that bin was marked eligible for merging. Nothing there forces the processor to wait for other FlowFiles to be allocated to a bin before merging: both minimums are set to 1 FlowFile and 0 bytes. In order to actually merge 100,000 FlowFiles (this is high and may trigger an out-of-memory error), there would need to be 100,000 FlowFiles, all with the same correlation attribute value, in the incoming connection queue at the time the processor runs. That is almost certainly not going to be the case. The Max Bin Age simply sets an exit strategy here: it will merge a bin regardless of whether the minimums have been met once the bin age reaches this value. You may want to set more reasonable values for your minimums, and also consider using multiple MergeContent processors in series to step up to the final merged number you are looking for. Thanks, Matt
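The bin-eligibility behavior described above can be sketched in a few lines of Python. This is not NiFi source code, just an illustration under the stated assumptions (bins keyed by a correlation attribute, a bin becoming merge-eligible once both minimums are met):

```python
from collections import defaultdict

def assign_and_check(bins, flowfile, correlation_attr,
                     min_entries=1, min_size=0):
    """Add a flowfile to its bin; return True if the bin is merge-eligible."""
    key = flowfile["attributes"].get(correlation_attr)
    bins[key].append(flowfile)
    bin_entries = len(bins[key])
    bin_size = sum(len(f["content"]) for f in bins[key])
    # With min_entries=1 and min_size=0 (the configuration discussed above),
    # a single flowfile immediately makes its bin eligible for merging.
    return bin_entries >= min_entries and bin_size >= min_size

bins = defaultdict(list)
ff = {"attributes": {"corr": "batch-1"}, "content": b"data"}
eligible = assign_and_check(bins, ff, "corr")            # mins already met
strict = assign_and_check(defaultdict(list), ff, "corr",
                          min_entries=100_000)           # needs many more files
```

With the defaults, the very first FlowFile satisfies both minimums, which is why the processor merges immediately; raising `min_entries` forces it to wait (until Max Bin Age intervenes).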
03-07-2017
08:49 PM
Hi @Matt Clarke. Perfect, thank you
02-12-2019
11:43 AM
@Joe P did you set up HTTPS, i.e. did you enable SSL/TLS on the server?
05-05-2017
12:22 PM
@Ayaskant Das Just wondering if the above was able to resolve your issue. The nifi-user.log screenshot you provided clearly shows that you have reached NiFi and successfully authenticated with the above user DN; however, that user is not authorized to access the NiFi /flow resource. Thank you, Matt. In order for users to get notifications of a comment on a post, you must tag them in the response using @<username> (example: @Matt Clarke).
03-02-2017
08:23 PM
1 Kudo
@nedox nedox You will want to use one of the available HDFS processors to get data from your HDP HDFS file system.
1. GetHDFS <-- Use for a standalone NiFi installation
2. ListHDFS --> RPG --> FetchHDFS <-- Use for a NiFi cluster installation
All of the HDFS-based NiFi processors have a property that allows you to specify a path to the HDFS site.xml files. Obtain a copy of your core-site.xml and hdfs-site.xml files from your HDP cluster and place them somewhere on the HDF hosts running NiFi. Point to these files using the "Hadoop Configuration Resources" processor property. example: Thanks, Matt
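The example value itself did not survive the formatting here. As a hypothetical illustration only (adjust the paths to wherever you placed the copied files on the NiFi host), the property might be set to:

```
Hadoop Configuration Resources: /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```

The property accepts a comma-separated list of local file paths to the Hadoop site configuration files.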
03-03-2017
05:23 AM
Uninstalled and installed again. Installed HDF mpack. Thanks @Jay SenSharma
02-24-2017
01:51 PM
Can you take a thread dump and provide the output here? ./bin/nifi.sh dump /path/to/output/dump.txt