Member since: 07-30-2019
Posts: 3398
Kudos Received: 1621
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 483 | 11-05-2025 11:01 AM |
| | 373 | 11-05-2025 08:01 AM |
| | 596 | 11-04-2025 10:16 AM |
| | 734 | 10-20-2025 06:29 AM |
| | 874 | 10-10-2025 08:03 AM |
11-28-2016
07:41 PM
1 Kudo
@Mothilal marimuthu Those processors were not introduced until Apache NiFi 1.0 / HDF 2.0. Your screenshot shows you are running NiFi 0.3 / HDF 1.1.
11-28-2016
07:08 PM
The bootstrap port has nothing to do with the web UI port. Please take a look at the nifi-app.log for the cause of the shutdown.
11-28-2016
07:06 PM
@Sanaz Janbakhsh You should see why NiFi shut down in either the nifi-bootstrap.log or the nifi-app.log.
11-21-2016
08:53 PM
Very possible it is related to that bug. With regular queues in excess of the swap threshold of 20,000 FlowFiles, swapping will occur. The bug is in that swapping: it can result in the swapped FlowFiles' content not getting removed from the content repo. This continues until you eventually run out of disk space. On restart, all of that "orphaned" FlowFile content is removed, because there are no longer any FlowFiles referencing it. Matt
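For reference, the swap threshold mentioned above is set in nifi.properties on each node. A minimal sketch, assuming a default install layout (verify the value against your own conf/nifi.properties):

```properties
# conf/nifi.properties (excerpt)
# Number of FlowFiles a queue may hold in memory before NiFi begins
# swapping the overflow to disk; the bug described above is only
# triggered once swapping starts, i.e. once a queue exceeds this count.
nifi.queue.swap.threshold=20000
```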
11-21-2016
08:39 PM
@Philippe Marseille Apache NiFi 1.1 should be going up for a vote very soon.
11-21-2016
07:36 PM
1 Kudo
@Philippe Marseille The content size displayed in the UI will not map exactly to disk utilization, since NiFi stores multiple FlowFiles in a single claim in the content repo. A claim cannot be deleted until every FlowFile it contains has reached a point of termination in your dataflow, so with 450,000 queued FlowFiles it is possible you are still holding on to a large number of claims. Try clearing out some of this backlog and see if disk usage drops. Setting backpressure thresholds on connections is a good way to prevent your queues from getting so large. Another possibility is that you are running into https://issues.apache.org/jira/browse/NIFI-2925 . This bug has been addressed for the next releases, Apache NiFi 1.1 and HDF 2.1. Thanks, Matt
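The claim behavior described above is governed by content-repository settings in nifi.properties. A sketch of the relevant entries; the values shown were common defaults in NiFi releases of that era, so treat them as illustrative and check your own install:

```properties
# conf/nifi.properties (excerpt)
# How many FlowFiles may be written into a single content claim; the claim
# is only eligible for deletion once every one of them has terminated.
nifi.content.claim.max.flow.files=100
# Maximum amount of data appended to one claim before a new claim starts.
nifi.content.claim.max.appendable.size=10 MB
```

Lowering these values means fewer FlowFiles share each claim, so claims become reclaimable sooner, at the cost of more small files in the content repository.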
11-18-2016
09:06 PM
The source NiFi will initially communicate with the target cluster over the same HTTP(S) port you would use to access the target NiFi cluster's UI. After that initial communication, the target cluster provides your source NiFi with the configured nifi.remote.input.host and nifi.remote.input.port for each node in the target cluster, along with the current load on each node. If you left nifi.remote.input.host blank, Java will try to determine the hostname itself, which may result in an internal hostname your source cannot resolve, or even just localhost. I highly recommend setting this property to a public-facing FQDN for each node in your cluster. Matt
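As a sketch, the Site-to-Site properties described above live in nifi.properties on each node of the target cluster. The hostname and port below are placeholders, and note that in recent releases the raw socket port property is named nifi.remote.input.socket.port:

```properties
# conf/nifi.properties on each target-cluster node (excerpt)
# Public-facing FQDN the source NiFi should use for Site-to-Site traffic.
# If left blank, Java guesses the hostname, which may be an internal name
# the source cannot resolve, or just localhost.
nifi.remote.input.host=node1.example.com
# Port for incoming raw Site-to-Site socket connections.
nifi.remote.input.socket.port=10443
# Whether Site-to-Site connections must use TLS.
nifi.remote.input.secure=true
```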
11-18-2016
09:00 PM
1 Kudo
@Shishir Saxena Your source NiFi will need to be able to communicate directly with both nodes in your target cluster in order to load-balance data. What version of NiFi are you using?
11-18-2016
08:52 PM
I think you have it.
11-18-2016
08:21 PM
@Toky Raobelina Did the information provided help resolve your S2S issue?