Member since: 06-26-2015
Posts: 515
Kudos Received: 138
Solutions: 114

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2257 | 09-20-2022 03:33 PM |
| | 6010 | 09-19-2022 04:47 PM |
| | 3236 | 09-11-2022 05:01 PM |
| | 3702 | 09-06-2022 02:23 PM |
| | 5772 | 09-06-2022 04:30 AM |
08-01-2022 05:41 AM
@hegdemahendra

1. Do you see any logging related to the content_repository? Perhaps something related to NiFi not allowing writes to the content repository while waiting on archive clean-up?

2. Is any outbound connection from the HandleHttpRequest processor red at the time of the pause? That indicates backpressure is being applied, which stops the source processor from being scheduled until the backpressure ends.

3. How large is your Timer Driven thread pool? This is the pool of threads that scheduled components draw from. If it is set to 10 and all 10 are currently in use by components, the HandleHttpRequest processor, while scheduled, will be waiting for a free thread from that pool before it can execute. Adjusting the "Maximum Timer Driven Thread Count" requires careful consideration of the average CPU load on every node in your NiFi cluster, since the same value is applied to each node separately. A general starting pool size is 2 to 4 times the number of cores on a single node (a quick sizing sketch follows this post). From there, monitor the CPU load average across all nodes and use the node with the highest load average to decide whether you can add more threads to the pool. If a single node consistently has a much higher CPU load average, take a closer look at that server. Does it have other services running on it that are not running on the other nodes? Does it consistently hold disproportionately more FlowFiles than any other node? (That is typically a result of dataflow design not handling FlowFile load-balancing redistribution optimally.)

4. How many concurrent tasks are set on your HandleHttpRequest processor? The concurrent tasks are responsible for obtaining threads (one per concurrent task, if available) to read data from the container queue and create the FlowFiles. Perhaps the requests come in so fast that there are not enough available threads to keep the container queue from filling, thus blocking new requests.

Hope the above helps you get to the root of your issue. If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.

Thank you, Matt
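As a back-of-the-envelope illustration of the 2x-4x sizing guideline in point 3, here is a minimal Python sketch (the guideline itself comes from the post above; the script merely computes the range for the machine it runs on):

```python
# Rough starting range for NiFi's "Maximum Timer Driven Thread Count",
# using the 2x-4x cores guideline from the post above.
import os

cores = os.cpu_count() or 1  # fall back to 1 if the count is unavailable
low, high = 2 * cores, 4 * cores
print(f"Suggested starting range: {low}-{high} threads ({cores} cores detected)")
```

From that starting point, watch the CPU load average on every node and raise the value only if the busiest node still has headroom.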
07-31-2022 06:33 PM
@Domo , I'm not familiar enough with Cassandra to answer the question. I see, though, that there's a Jira for upgrading the Cassandra driver to version 4, but it hasn't gained traction yet. You might want to enquire in the Jira itself. Cheers, André
07-31-2022 06:21 PM
@KhASQ , Besides @SAMSAL 's solution, you can also use ReplaceText to eliminate the need to extract the entire content into an attribute. You'd still have to set a large enough buffer, though, to ensure your largest message can be processed. Cheers, André
07-31-2022 02:44 PM
@NJK , The availability of Atlas (and the number of nines you get) will depend on your implementation. Check this page for more information on Atlas high availability options. The more independent servers you have backing the Atlas service, the more nines you'll get. Cheers, André
07-22-2022 08:59 AM
Hi All, The problem was the JDK version. We were using OpenJDK 11.0.2, which had a bug in the TLS handshake. Solution: upgrade the JDK (we are now using 11.0.15).
07-18-2022 10:19 AM
@Neera456 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information requested? Thanks!
07-13-2022 11:20 PM
1 Kudo
@Lewis_King , Here's an idea. You can fork the "a" output of the QueryRecord processor and send it through a ReplaceText > MergeRecord > PutFile sequence. The ReplaceText processor simply replaces the entire content of the flowfile with the information you want to record in the log, producing one row per flowfile with the source type ("a") and the timestamp. You can then send these rows to a MergeRecord, to avoid saving too many small log files, and then to a PutFile to persist the log. The "b" output can be processed in a similar way. Cheers, André
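As a rough sketch of what the ReplaceText Replacement Value could look like for the log row described above (the CSV layout and timestamp format are illustrative assumptions; `now()` and `format()` are standard NiFi Expression Language functions):

```
a,${now():format("yyyy-MM-dd HH:mm:ss")}
```

With ReplaceText's Evaluation Mode set to "Entire text", each flowfile routed through it becomes a single log row containing the source type and the timestamp.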
07-13-2022 07:57 AM
@Drozu, Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-13-2022 06:45 AM
1 Kudo
@MarioFRS , You can set it to the following: `@([^@]*)@` Cheers, André
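A quick way to sanity-check the pattern (a minimal Python sketch; the sample string is made up for illustration):

```python
import re

# The pattern from the reply above: capture everything between a pair of @ signs.
pattern = re.compile(r"@([^@]*)@")

sample = "prefix @value@ suffix"  # illustrative input
print(pattern.findall(sample))    # prints: ['value']
```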