Member since
07-30-2019
3387
Posts
1617
Kudos Received
999
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 133 | 11-05-2025 11:01 AM |
| | 378 | 10-20-2025 06:29 AM |
| | 518 | 10-10-2025 08:03 AM |
| | 358 | 10-08-2025 10:52 AM |
| | 394 | 10-08-2025 10:36 AM |
06-01-2017
03:23 PM
1 Kudo
@Alvaro Dominguez The primary node could change at any time. You could use the PostHTTP and ListenHTTP processors to route FlowFiles from multiple nodes to a single node. My concern would be heap usage when merging (zipping) 160K FlowFiles on a single NiFi node. The FlowFile metadata for all of the FlowFiles being zipped would be held in heap memory until the zip is complete. Any objection to having a zip of zips? In other words, you could still create 4 unique zip files (1 per node, each with a unique filename), then send those zipped files to one node to be zipped once more into a new zip with the single name you want written to HDFS. Thanks, Matt
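Outside of NiFi, the zip-of-zips idea can be sketched in plain Python. This is only an illustration of the technique, not NiFi's implementation; the node filenames and payloads are made up:

```python
import io
import zipfile

def zip_of_zips(inner_zips: dict[str, bytes]) -> bytes:
    """Combine several already-zipped payloads (one per node) into one outer zip.

    inner_zips maps a unique per-node filename to that node's zip bytes.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as outer:
        for name, data in inner_zips.items():
            # ZIP_STORED: the inner payloads are already compressed,
            # so re-compressing them would waste CPU for little gain.
            outer.writestr(name, data)
    return buf.getvalue()

# Simulate 4 per-node zips, each holding one small file.
nodes = {}
for n in range(4):
    inner = io.BytesIO()
    with zipfile.ZipFile(inner, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr(f"node{n}/data.txt", f"records from node {n}")
    nodes[f"node{n}.zip"] = inner.getvalue()

combined = zip_of_zips(nodes)
outer_names = zipfile.ZipFile(io.BytesIO(combined)).namelist()
```

The outer zip only ever holds 4 entries, which is the point of the suggestion: the final merge step touches 4 files instead of 160K.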
06-13-2017
03:25 PM
@Oleksandr Solomko have you changed the default value of the "nifi.queue.swap.threshold" property in nifi.properties? If so, you may be running into NIFI-3897.
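For reference, the property lives in conf/nifi.properties; the value shown below is the stock default as I recall it, so verify it against your NiFi version:

```properties
# conf/nifi.properties
# FlowFiles beyond this count in a single queue are swapped out to disk.
nifi.queue.swap.threshold=20000
```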
06-01-2017
11:23 AM
@Simran Kaur I had a feeling your issue was related to a missing config. Glad to hear you got it working. If this answer addressed your original question, please mark it as accepted. As far as your other question goes, I see you already started a new question (https://community.hortonworks.com/questions/105720/nifi-stream-using-listenhttp-processor-creates-too.html). That is the correct approach in this forum, we want to avoid asking unrelated questions in the same post. I will have a look at that post as well. Thank you, Matt
06-02-2017
02:16 PM
Thanks @Matt Clarke. Will downgrade ASAP.
11-16-2018
01:06 PM
Article content updated to reflect new provenance implementation recommendation and change in JVM Garbage Collector recommendation.
05-25-2017
07:23 PM
As Matt pointed out, in order to make use of 100 concurrent tasks on a processor, you will need to increase the Maximum Timer Driven Thread Count above 100. Also, as Matt pointed out, this means each node will have that many threads available.

As far as general performance goes, the performance of a single request/response with Jetty depends on what is being done in the request/response. We can't just say "Jetty can process thousands of records in seconds" unless we know what is being done with those records in Jetty. If you deployed a WAR with a servlet that immediately returned 200, that performance would be a lot different from a servlet that had to take the incoming request and write it to a database, an external system, or disk.

With HandleHttpRequest/Response, each request becomes a flow file, which means updates to the flow file repository and content repository, which means disk I/O, and then transferring those flow files to the next processor, which reads them, which means more disk I/O. I'm not saying this can't be fast, but there is more happening there than just a servlet that returns 200 immediately.

What I was getting at with the last question was that if you have 100 concurrent tasks on HandleHttpRequest and 1 concurrent task on HandleHttpResponse, eventually the response side will become the bottleneck.
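The request/response imbalance described above can be sketched with a toy queue model in Python. The worker counts and sleep time are illustrative only, not NiFi internals:

```python
import queue
import threading
import time

REQUEST_WORKERS = 8    # stands in for many concurrent HandleHttpRequest tasks
REQUESTS = 40

flow_files = queue.Queue()
completed = []
lock = threading.Lock()

def handle_request(n):
    # Receiving a request is cheap here; it just enqueues a "flow file".
    for i in range(n):
        flow_files.put(i)

def handle_response():
    # All responses are serialized through this single worker,
    # so it sets the pace no matter how many request workers exist.
    while True:
        item = flow_files.get()
        if item is None:
            break
        time.sleep(0.001)  # pretend per-flow-file disk I/O
        with lock:
            completed.append(item)

producers = [
    threading.Thread(target=handle_request, args=(REQUESTS // REQUEST_WORKERS,))
    for _ in range(REQUEST_WORKERS)
]
consumer = threading.Thread(target=handle_response)
consumer.start()
for p in producers:
    p.start()
for p in producers:
    p.join()
flow_files.put(None)  # poison pill to stop the single response worker
consumer.join()
```

The producers finish almost immediately while the queue drains at the single consumer's rate, which is the bottleneck effect described in the post.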
05-25-2017
02:22 PM
Please take a look at @Matt Clarke's response above on how to extract CSV files only. It is the most straightforward way.
05-24-2017
01:52 PM
@Bhushan Babar Glad I was able to help resolve your issue. Could you please click "Accept" on the answer I provided to close out this question in the community? Thank you,
Matt
05-24-2017
12:06 PM
@regie canada The ExtractText processor creates FlowFile attributes from the extracted text. NiFi has an AttributesToJSON processor you can use to generate JSON from these created attributes. For new questions, please open a new question; it makes it easier for community users to search for answers. Thanks, Matt
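Conceptually, the ExtractText-then-AttributesToJSON chain does something like the following pure-Python sketch. The regexes, attribute names, and sample content are invented for illustration, not taken from any real flow:

```python
import json
import re

def extract_text(content: str, patterns: dict[str, str]) -> dict[str, str]:
    """Mimic ExtractText: each named regex that matches the content
    becomes an attribute holding the first capture group."""
    attrs = {}
    for name, pattern in patterns.items():
        m = re.search(pattern, content)
        if m:
            attrs[name] = m.group(1)
    return attrs

def attributes_to_json(attrs: dict[str, str]) -> str:
    """Mimic AttributesToJSON: serialize the attributes as a JSON object."""
    return json.dumps(attrs, sort_keys=True)

content = "user=alice status=active"
attrs = extract_text(content, {"user": r"user=(\w+)", "status": r"status=(\w+)"})
payload = attributes_to_json(attrs)
```

In NiFi the attributes would live on the FlowFile rather than in a dict, but the extract-then-serialize shape is the same.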
05-19-2017
08:17 AM
@Matt Clarke Thank you very much; your answer was very useful to me.