01-11-2024
01:41 PM
1 Kudo
@enam Looks like you have a bad File Filter Regex in your ListSFTP processor configuration:

.*file.*\.xls

The leading ".*" greedily matches any run of characters before the string "file", and the second ".*" matches any characters up to the last occurrence of ".xls". However, all of your filenames start with "file" and have no characters before it. Try simplifying the File Filter Regex by removing the ".*" before "file":

file.*\.xls

Right-click on the processor, select "View state", and then "Clear state". Then start the ListSFTP processor again and check whether it generates a NiFi FlowFile for each file on your SFTP server.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
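P.S. If you want to sanity-check the filter before clearing state, a quick standalone test in plain Java (the same regex engine NiFi uses) might look like the sketch below. The filenames are only placeholders, and it assumes the filter must match the entire filename:

import java.util.regex.Pattern;

public class FileFilterCheck {
    public static void main(String[] args) {
        String filter = "file.*\\.xls";  // proposed File Filter Regex
        // Placeholder filenames -- substitute a few real names from your SFTP server
        String[] samples = {"file_2024_01.xls", "file1.xls", "report.xls"};
        for (String name : samples) {
            // Pattern.matches only returns true if the regex matches the whole name
            System.out.println(name + " -> " + Pattern.matches(filter, name));
        }
    }
}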
01-11-2024
01:28 PM
1 Kudo
@Anderosn Is your InvokeHTTP processor triggered by a FlowFile from an inbound connection, or does it have no inbound connection and execute purely based on its configured run schedule? This is one of the very few processors where an inbound connection is optional, but its behavior differs depending on which configuration you choose. With no inbound connection there is no FlowFile to "retry" when an execution ends in the "failure" or "No retry" result; essentially, with no inbound connection it is already retrying every time it executes. You could use a GenerateFlowFile processor to feed an empty trigger FlowFile to the InvokeHTTP processor to trigger its execution. This would then give you a FlowFile that the Retry configuration can act on.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-11-2024
01:22 PM
@JamesZhang This is certainly a challenging issue. The output you shared all points to good certificates, but it gets you no closer to why the mutual TLS exchange between your two NiFi nodes is not yielding a successful handshake. I would start by comparing the configuration of NiFi on both nodes to make sure the nifi.properties files match. Verify that both nodes' NiFi instances are using the same Java version. You may also need to look at the network traffic between the nodes: is there some device (load balancer, firewall, etc.) between them that may be interfering with the certificate exchange?

Matt
01-11-2024
01:16 PM
1 Kudo
@glad1 Based on what you shared, you may be able to accomplish what you are trying to do using a couple of additional processors and a child Process Group that utilizes the "Process Group FlowFile Concurrency" settings.

So your use case is: for each FlowFile output by your ExecuteSQLRecord processor, you want to do ALL of the following before the next FlowFile produced by ExecuteSQLRecord is processed:

1. Split the FlowFile into X number of FlowFiles using SplitJson.
2. Modify each one of the produced split FlowFiles using UpdateRecord.
3. Write all those modified FlowFiles to another database using PutDatabaseRecord.
4. Run ExecuteSQL only once to update the record that all splits were processed.

Then repeat the above for the next produced FlowFile.

If this is correct, here is what you might want to try:

1. Create a Process Group that you will insert in the middle of this flow, as shown in the following image.
2. Configure that Process Group as follows. The important properties to set here are:
- Process Group FlowFile Concurrency = Single FlowFile Per Node
- Process Group Outbound Policy = Batch Output

This allows only one FlowFile (per node, in a multi-node NiFi) to enter the Process Group at any given time. Inside this Process Group you handle the processing of that FlowFile (split, update, put to database). The Outbound Policy will not release any of the produced splits from the Process Group until all of them are queued at the output port.

You'll notice I added one additional, optional processor, ModifyBytes, to your dataflow (configured with "Remove All Content" = true). This zeroes out the content of the FlowFiles after they have been written by the PutDatabaseRecord processor. Those FlowFiles, now with no content, are sent to the connection feeding the output port, where they are held until all of the produced splits are queued. They will then all be released at once from the Process Group to the new MergeContent processor. The MergeContent processor merges all those FlowFiles into a single FlowFile that feeds your ExecuteSQL (UpdateRecordStatus) processor. Now you have a single notification for the original FlowFile that was split and processed. Additionally, you have created separation between each source FlowFile processed at the start of the dataflow.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-10-2024
01:23 PM
@Sartha I think I may now be understanding the confusion here. MiNiFi does not alter the functionality of the components (NiFi processors, controller services, etc.) in any way. They all function exactly as they do in NiFi. As I understand it now, your use case is to tail a file located on a different server from where NiFi or MiNiFi is installed. Correct? Neither NiFi nor MiNiFi can tail a log file located on another server. The TailFile processor can only tail files local to the server on which NiFi or MiNiFi is running. So MiNiFi would need to be installed on the server where the log file exists in order to tail that file. MiNiFi installed on that server would then be able to transfer the generated FlowFiles over the network to the NiFi running on your local machine. MiNiFi is essentially NiFi without the UI (plus a few other differences). Either NiFi or MiNiFi would need to be installed on that server.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-10-2024
01:09 PM
1 Kudo
@Alexy Sorry about the confusion, it should have said hours instead of days in my response:

2. Max History does not count the incremental (%i) logs generated. It is based on the date pattern used, %d{yyyy-MM-dd_HH}. Since you roll logs every hour, MaxHistory would retain 10 hours of logs unless TotalSizeCap is reached before 10 hours of logs are created.

So your example here:

nifi-app_2024-01-09_11.0.log
nifi-app_2024-01-09_11.1.log
nifi-app_2024-01-09_11.2.log
nifi-app_2024-01-09_11.3.log

implies you are using the pattern:

<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>

This also means you are using the "maxFileSize" property, which controls when each incremental log within an hour is rolled. Without a "TotalSizeCap" property set, logs will be retained for 2 hours or until you run out of disk space, since there is no boundary on the number of incremental logs that may roll within each of those 2 hours:

nifi-app_2024-01-09_11.0.log
nifi-app_2024-01-09_11.1.log
nifi-app_2024-01-09_11.2.log
nifi-app_2024-01-09_11.3.log
nifi-app_2024-01-09_11.4.log
nifi-app_2024-01-09_11.5.log
...
nifi-app_2024-01-09_12.0.log
nifi-app_2024-01-09_12.1.log
nifi-app_2024-01-09_12.2.log
nifi-app_2024-01-09_12.3.log
nifi-app_2024-01-09_12.4.log
...
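For reference, these pieces all live in the appender's rolling policy in logback.xml. A sketch of what that block might look like is below; the values are illustrative (not your actual settings), and depending on your NiFi version the exact rolling policy class may differ, but the three properties interact the same way:

<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <!-- rolls a new %i increment whenever the current file reaches this size -->
        <maxFileSize>100MB</maxFileSize>
        <!-- number of rolled time periods (hours, given the %d pattern) to retain -->
        <maxHistory>2</maxHistory>
        <!-- hard ceiling on the total size of all retained rolled logs -->
        <totalSizeCap>10GB</totalSizeCap>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>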
If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-10-2024
12:57 PM
@pratschavan FetchFile is typically used in conjunction with ListFile so that it only fetches the content for the FlowFiles it is passed; ListFile would list each file only once. If you are using only the FetchFile processor, I am guessing you configured the "File to Fetch" property with the absolute path to your file. Used this way, the processor will fetch that same file every time it is scheduled to execute per its "Scheduling" tab configuration. Can you share screenshots of how you have these two processors configured?

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-10-2024
12:45 PM
@FrankHaha Have you tried using the "Infer Schema" Schema Access Strategy in the JsonTreeReader 1.24.0 controller service instead of fetching the schema from the AvroSchemaRegistry? Another option would be to use the ExtractRecordSchema 1.24.0 processor along with a JsonTreeReader 1.24.0 controller service configured with the "Infer Schema" Schema Access Strategy to output the schema into the FlowFile attribute "avro.schema". You can then take the produced schema from that FlowFile attribute and add it to your AvroSchemaRegistry for future use.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
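P.S. The value you will find in the "avro.schema" attribute is just a plain Avro schema string. As a rough illustration (the record and field names below are made up, not from your data), you could validate it in a small standalone Java check before adding it to the AvroSchemaRegistry; this sketch assumes the Apache Avro library is on the classpath:

import org.apache.avro.Schema;

public class InferredSchemaCheck {
    public static void main(String[] args) {
        // Hypothetical example of an inferred schema as it might appear in the
        // avro.schema FlowFile attribute; your field names and types will differ.
        String inferred = "{ \"type\": \"record\", \"name\": \"nifiRecord\", \"fields\": ["
                + " { \"name\": \"id\", \"type\": [\"null\", \"long\"] },"
                + " { \"name\": \"name\", \"type\": [\"null\", \"string\"] }"
                + " ] }";
        // Parsing confirms the schema is valid Avro before you register it.
        Schema schema = new Schema.Parser().parse(inferred);
        System.out.println(schema.toString(true));
    }
}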
01-09-2024
07:25 AM
@Sartha I am very confused by your use case:

"here my case is I want to read log files presented in nifi not from minifi so provided file to tail path as nifi-app.log"

If you are just trying to tail the nifi-app.log, what do you need MiNiFi for here? Let's step back and get a clear understanding of your use case. What are you trying to accomplish? Remove NiFi from the equation and define your use case, i.e.: "I have server1 and server2. I have some service running on server2 writing log files. I want to read those logs as they are being written and send that log output to server1," etc. The more detail, the better.

As far as processors go, there is detailed documentation for each one: https://nifi.apache.org/docs/nifi-docs/

Understanding which NiFi components you need to use starts with a detailed use case. I am still not clear on your use case and where MiNiFi fits into it if you are trying to tail the nifi-app.log. Tailing the nifi-app.log implies you have NiFi running on the server where the nifi-app.log is being written, so you could just use NiFi to tail its own nifi-app.log. But why do this? What are you trying to capture from nifi-app.log? What are you planning to do with this log output once you tail it?

Thank you, Matt
01-09-2024
07:03 AM
@JamesZhang The logs shared indicate a TLS exchange issue. Have you looked at the output of openssl to see what your running NiFi responds with?

openssl s_client -connect runtime-0.runtime-statefulset.default.svc.cluster.local:443 -showcerts

and

openssl s_client -connect runtime-1.runtime-statefulset.default.svc.cluster.local:443 -showcerts