Member since: 07-30-2019
Posts: 3133
Kudos Received: 1565
Solutions: 909
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 156 | 01-09-2025 11:14 AM
 | 888 | 01-03-2025 05:59 AM
 | 439 | 12-13-2024 10:58 AM
 | 497 | 12-05-2024 06:38 AM
 | 392 | 11-22-2024 05:50 AM
11-21-2016
07:36 PM
1 Kudo
@Philippe Marseille The content size displayed in the UI will not map exactly to disk utilization, since NiFi stores multiple FlowFiles in a single claim in the content repository. A claim cannot be deleted until every FlowFile it contains has reached a point of termination in your dataflow, so with 450,000 queued FlowFiles it is possible you are still holding on to a large number of claims. Try clearing out some of this backlog and see if disk usage drops. Setting backpressure thresholds on connections is a good way to prevent your queues from getting so large. Another possibility is that you are running into https://issues.apache.org/jira/browse/NIFI-2925 . This bug has been addressed for the next release, Apache NiFi 1.1 and HDF 2.1. Thanks, Matt
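For reference, the claim-packing behavior described above is governed by a couple of settings in nifi.properties. A minimal sketch, assuming NiFi 1.x; the values shown are illustrative, so check the defaults shipped with your distribution:

```
# nifi.properties -- content claim packing (NiFi 1.x, illustrative values)
# Maximum size a content claim may grow to before a new claim is started
nifi.content.claim.max.appendable.size=10 MB
# Maximum number of FlowFiles whose content may share a single claim
nifi.content.claim.max.flow.files=100
```

Because a claim is only eligible for cleanup once every FlowFile referencing it has terminated, a large queued backlog can pin far more disk space than the queued content size alone would suggest.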
11-18-2016
09:06 PM
The source NiFi will initially communicate with the target cluster over the same HTTP(S) port you would use to access the target NiFi cluster's UI. After that initial communication, the target cluster will provide your source NiFi with the configured nifi.remote.input.host and nifi.remote.input.port for each node in the target cluster, along with the current load on each node. If you left nifi.remote.input.host blank, Java will try to determine the hostname, which may result in either an internal hostname your source cannot resolve or even just localhost. I highly recommend setting this property to a publicly resolvable FQDN for each node in your cluster. Matt
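A minimal sketch of the relevant Site-to-Site block in nifi.properties, assuming NiFi 1.x, where the RAW port key is spelled nifi.remote.input.socket.port; the hostname and port values below are placeholders for your environment:

```
# nifi.properties -- Site-to-Site input settings (set on every node)
# Publicly resolvable FQDN advertised to remote clients; do not leave blank
nifi.remote.input.host=nifi-node1.example.com
# Set to true when the cluster is secured with HTTPS
nifi.remote.input.secure=false
# Port used for RAW socket Site-to-Site transfers
nifi.remote.input.socket.port=10000
# Allow Site-to-Site over the existing HTTP(S) port as well
nifi.remote.input.http.enabled=true
```

Each node advertises its own values to remote clients, so these properties need to be set correctly on every node in the target cluster.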
11-18-2016
09:00 PM
1 Kudo
@Shishir Saxena Your source NiFi will need to be able to communicate directly with both nodes in your target cluster in order to load-balance data. What version of NiFi are you using?
11-18-2016
08:52 PM
I think you have it.
11-18-2016
08:21 PM
@Toky Raobelina Did the information provided help resolve your S2S issue?
11-18-2016
06:49 PM
2 Kudos
@Toky Raobelina When you add an RPG to the graph, the URL you provide in its configuration should be the same URL you would use to access that target NiFi's UI. You will then select whether to use the RAW or HTTP transport protocol. With RAW (the legacy format), your source NiFi will send data across port 8022; with HTTP, it will send data across port 8070. So with RAW you need to make sure your source NiFi can communicate with port 8022 on the target NiFi.

Based on what you shared, it sounds like you have not yet set up an entry point on the target NiFi to accept the data you want to send to it. On the target NiFi you will need to add an input port to your canvas. This port must be added at the root level to be used for S2S. By root level, I mean it cannot be contained within a process group on the target NiFi. So on the canvas of your target NiFi, you will have a flow that looks similar to this:

It may take a couple of minutes for the input port to show up on the RPG located on the source NiFi. You can always right-click on the RPG and select "Refresh". On your source NiFi you will have a flow similar to this:

Once the input port is set to run and you have enabled transmission on your RPG, data should start flowing, unless a firewall is blocking the connection on port 8022 as mentioned above. Thanks, Matt
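A minimal sketch of how those two ports would map to nifi.properties on the target NiFi, assuming NiFi 1.x and the 8022/8070 values specific to this thread (your ports may differ):

```
# nifi.properties on the target NiFi -- hedged example using this thread's ports
# UI / HTTP Site-to-Site port (this is the URL you enter in the RPG)
nifi.web.http.port=8070
# RAW socket Site-to-Site port
nifi.remote.input.socket.port=8022
```

Whichever protocol you select in the RPG, the corresponding port must be reachable from the source NiFi through any firewalls in between.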
11-18-2016
05:43 PM
@Toky Raobelina Are the configs provided above from the system you are trying to send data to?
Did you add an "input port" to the root (top level) of the canvas on that server?
So I am gathering you were able to add the RPG, but it is reporting "no input ports available"? Matt
11-17-2016
11:21 PM
Hey @Jobin George, confirm that G1GC is still set as the garbage collector in your bootstrap.conf. There really isn't much more to this reporting task. How long did you leave it running? Do you have an active dataflow that is constantly using JVM memory? By default this reporting task only runs every 5 minutes, so the threshold would need to be exceeded at the time it runs. Could your memory usage be going up and down but overall staying low enough to never trigger it? Try running a constant flow of data that is allowed to queue on some connections. Since FlowFile attributes live in heap memory up to the swap threshold, that will produce constant heap usage. See if that causes it to trigger. Thanks, Matt
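A minimal sketch of the relevant bootstrap.conf lines, assuming a stock NiFi 1.x install; the java.arg index for the G1GC flag may differ in your file:

```
# bootstrap.conf -- JVM settings read by the NiFi bootstrap process
# Heap sizing (stock defaults shown; raise for production loads)
java.arg.2=-Xms512m
java.arg.3=-Xmx512m
# Enable the G1 garbage collector (commented out by default in some builds)
java.arg.13=-XX:+UseG1GC
```

If the -XX:+UseG1GC line is commented out, a memory reporting task pointed at a G1 memory pool has nothing to observe and will never cross its threshold.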
11-17-2016
10:55 PM
1 Kudo
@Jobin George Are you seeing anything in the nifi-app.log? I configured this reporting task on an instance of HDF 2.0 and it appears to be working. Thank you, Matt
11-17-2016
07:55 PM
1 Kudo
I would never suggest removing the authorizers.xml file. Both the users.xml and authorizations.xml files are built from the configuration in authorizers.xml. Did you try providing absolute paths to your keystore and truststore JKS files in your nifi.properties file?
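A minimal sketch of the TLS properties in question, assuming NiFi 1.x; the paths, passwords, and store type are placeholders for your environment:

```
# nifi.properties -- TLS settings; absolute paths avoid working-directory ambiguity
nifi.security.keystore=/opt/nifi/conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.truststore=/opt/nifi/conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
```

Relative paths here are resolved against the directory NiFi runs from, which is a common reason a keystore that exists on disk still cannot be found at startup.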