Member since: 07-30-2019
Posts: 3397
Kudos Received: 1619
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 480 | 11-05-2025 11:01 AM |
| | 369 | 11-05-2025 08:01 AM |
| | 590 | 11-04-2025 10:16 AM |
| | 729 | 10-20-2025 06:29 AM |
| | 869 | 10-10-2025 08:03 AM |
05-04-2018
03:57 PM
@Prakhar Agrawal @Felix Albani is correct. There is no way to automatically have a node delete its flow.xml.gz in favor of the cluster's flow. If we allowed that, it could lead to unexpected data loss. Let's assume a node was taken out of the cluster to perform some side work and the user tries to rejoin it to the cluster; if it just took the cluster's flow, any data queued in a connection that does not exist in the cluster's flow would be lost. It would be impossible for NiFi to know whether joining this node to this cluster was a mistake or intended, so NiFi simply informs you there is a mismatch and expects you to resolve the issue. - Also, I noticed you mentioned "NCM" (NiFi Cluster Manager). NiFi moved away from having an NCM starting with the Apache NiFi 1.x versions. Newer versions have a zero-master cluster where any connected node can be elected as the cluster's coordinator. - Thanks, Matt
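One common way to resolve such a mismatch by hand is to move the node's local flow.xml.gz aside so the node inherits the cluster's flow on its next restart. A minimal sketch, assuming a standard conf directory layout (the helper name `back_up_local_flow` is made up for illustration; check your install's actual conf path before running anything like this):

```shell
# back_up_local_flow: move a node's local flow.xml.gz aside so the node
# inherits the cluster's flow the next time it joins. The original file is
# kept as a .bak so nothing is irreversibly deleted.
back_up_local_flow() {
  conf_dir="$1"
  if [ -f "$conf_dir/flow.xml.gz" ]; then
    mv "$conf_dir/flow.xml.gz" "$conf_dir/flow.xml.gz.bak"
    echo "backed up $conf_dir/flow.xml.gz"
  else
    echo "no local flow.xml.gz found in $conf_dir"
  fi
}

# Typical usage (path is an assumption, adjust to your install):
#   back_up_local_flow /opt/nifi/conf
#   then restart NiFi on that node
```

Keeping a .bak copy matters for exactly the data-loss reason described above: if rejoining this node was a mistake, the original flow can still be restored.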
05-23-2018
03:31 PM
Nice, thanks. I figured there had to be a way to tell it that it was a solo node, but I just wasn't phrasing it right for Google, apparently. Though the problem ended up being solved with a simple delete/reinstall.
04-27-2018
06:11 AM
@Matt Clarke Thanks Matt for the information and for helping out. It worked.
04-27-2018
05:43 AM
Thanks. I was not accessing the API with the correct syntax.
04-24-2018
12:08 PM
The input file is in text/html format, and the output file must be in CSV format, because it will feed the database.
04-30-2018
06:15 PM
@Olivier Drouin @Xavier COUDRE Did you get your Site-To-Site working? If you found the answer below helpful, please take a moment to login and click "accept" below the answer.
04-17-2018
01:34 PM
@Matt Clarke, that seems like quite a refined approach. Happy to see your response.
11-21-2018
03:59 PM
Thanks for your answer. I wanted to have only "one" queue where all flowfiles would be waiting. I know now that it was a bad idea => I reduced the size of the queue and now use backpressure. It corrected the priority problem. Thanks again!
04-12-2018
02:20 PM
@Bharadwaj Bhimavarapu
Processors within the body of a dataflow should NOT be configured to use the "Primary node" only execution strategy. The only processors that should be scheduled to run on "Primary node" only are data-ingest type processors that do not use cluster-friendly protocols. The most common non-cluster-friendly ingest processors have "List<type>" processor names (ListSFTP, ListHDFS, ListFTP, ListFile, ...).
-
When a node is no longer elected as the primary node, it will stop scheduling only those processors set for "Primary node" only execution. All other processors will continue to execute. The newly elected primary node will begin executing its "Primary node" only scheduled processors. These processors are generally designed to record some cluster-wide state on where the previous primary node's execution left off, so the same processor executing on the new primary node picks up where the other left off.
-
This is why it is important that any processor that takes an incoming connection from another processor is not scheduled for "Primary node" only execution. If the primary node changes, you still want the original primary node to continue processing the data queued downstream of the "Primary node" only ingest processor.
-
There is no way to specify a specific node in a NiFi cluster to be the primary node. It is important to make sure that any one of your nodes is capable of executing the primary node processors at any time.
-
ZooKeeper is responsible for electing both the primary node and the cluster coordinator in a NiFi cluster. If your GC cycles are affecting the ability of your nodes to communicate with ZK in a timely manner, this may explain the constant election changes by ZK in your cluster. My suggestion would be to adjust the ZK timeouts in NiFi (the defaults are only 3 secs, which is far from ideal in a production environment). The following properties can be found in the nifi.properties file:
nifi.zookeeper.session.timeout=60 secs
nifi.zookeeper.connect.timeout=60 secs
*** If using Ambari to manage your HDF cluster, make the above changes via the NiFi configs in Ambari.
-
Thanks, Matt
-
If you found this answer addressed your initial question, please take a moment to login and click "accept" on the answer.
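As a sketch, the relevant fragment of nifi.properties might look like the following (the ZooKeeper hostnames are placeholders, not values from this thread; the 60-sec timeouts follow the suggestion above):

```
# conf/nifi.properties (hostnames are illustrative placeholders)
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
nifi.zookeeper.session.timeout=60 secs
nifi.zookeeper.connect.timeout=60 secs
```

Raising these timeouts gives nodes under GC pressure more time to heartbeat to ZooKeeper before an election is triggered.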
... View more
04-11-2018
06:55 PM
@Zack Atkinson Make sure that every node in your NiFi cluster can resolve the hostnames of every other node in your NiFi cluster. Make sure that all NiFi nodes can resolve and reach the configured ZooKeeper servers. Make sure the following properties are set and there are no typos (including leading or trailing whitespace) in the nifi.properties file:
nifi.zookeeper.connect.string <-- should be set to resolvable hostnames for the ZooKeeper servers
nifi.web.https.host or nifi.web.http.host <-- should be set to a resolvable hostname for the server
nifi.cluster.node.address <-- should be set to a resolvable hostname for the server (and nifi.cluster.is.node should be set to true)
What is seen in the nifi-app.log around the timeframe the issue occurs? Is there a full stack trace with this error? Thanks, Matt
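The leading/trailing-whitespace mistake mentioned above is easy to miss by eye. A quick grep can flag it; a minimal sketch (the function name `check_props_whitespace` is made up for illustration, and the pattern only catches whitespace right after `=` or at end of line):

```shell
# check_props_whitespace: print lines of a properties file whose value starts
# with whitespace (space right after '=') or ends with trailing whitespace.
# Internal spaces in values like "60 secs" are intentionally NOT flagged.
check_props_whitespace() {
  grep -nE '=[[:space:]]|[[:space:]]$' "$1"
}

# Typical usage (path is an assumption, adjust to your install):
#   check_props_whitespace /opt/nifi/conf/nifi.properties
```

grep exits non-zero when nothing matches, so a clean file produces no output, which is the result you want.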