Member since: 07-30-2019
Posts: 3348
Kudos Received: 1612
Solutions: 986
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 53 | 09-23-2025 08:56 AM |
| | 26 | 09-23-2025 05:58 AM |
| | 82 | 09-22-2025 02:12 PM |
| | 26 | 09-22-2025 06:37 AM |
| | 49 | 09-16-2025 11:51 AM |
09-23-2025
10:36 AM
1 Kudo
@Bern This is your new question without an accepted solution, so I am a bit confused by your last response. Matt
09-23-2025
10:09 AM
@AlokKumar NiFi allows very granular authorization down to the individual component. A component such as a processor will inherit its authorizations from the Process Group in which it resides IF there are no explicit policies set directly on the processor itself. Likewise, a Process Group will inherit its authorizations from its parent Process Group if no explicit policies are set directly on that child Process Group. When you launch NiFi for the very first time, NiFi creates the root Process Group for you with the name "NiFi Flow". It is the UI canvas you see when you access the UI.

From the second image you shared, we can see that you have accessed the "policies" for a child Process Group named "Copy of ProcessGroupAdminTest". We can also see that it is inheriting the "view the component" policy from the root Process Group "NiFi Flow". This is why the add-user and delete options are greyed out. You need to first click "Override" and choose either to start with no users or to copy the currently authorized users. After doing this you will be able to add additional users to this policy on the child Process Group. Keep in mind that once you override inheritance on a component's policies, inheritance no longer applies to that component: any changes to the policies set on the parent Process Group "NiFi Flow" will not get applied to this child Process Group.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
09-23-2025
08:56 AM
1 Kudo
@Bern The two outputs you shared are from two different Site-To-Site reporting tasks. The first is produced by the SiteToSiteBulletinReportingTask. The second is produced by the SiteToSiteStatusReportingTask; its fields will vary based upon the type of component. The exceptions you shared are bulletins only, and the issue is always reported as a failure sending to the http://node-1:8080/ node. I see all your other configurations are based on IPs. Are all your NiFi nodes able to properly resolve "node-1" to the correct IP address? Thank you, Matt
09-23-2025
05:58 AM
1 Kudo
@Bern For NiFi Site-To-Site (S2S), you can NOT have each node configured differently (other than each node's unique hostname being set). The way Site-To-Site works is as follows:

1. The Destination URL is configured with a comma-separated list of NiFi URLs for the hosts in the target NiFi cluster (a comma-separated list allows S2S to keep functioning if one of the nodes in the target cluster is down). You can configure just one target URL if you want and it will still work. If your NiFi cluster is secured, the destination URLs must also be HTTPS URLs.
2. S2S attempts to connect to the first reachable URL in the list to fetch S2S details about the target cluster (number of nodes in the cluster, cluster hostnames, whether HTTP is enabled, the RAW port of each node, the load on each node, etc.). The S2S details are rechecked every 30 seconds to see if they have changed (for example, a node being added to or removed from the target cluster).
3. S2S then uses that information to distribute FlowFiles across all nodes in the destination NiFi cluster.

The client (SiteToSiteStatusReportingTask) dictates whether you want to use the RAW or HTTP transport protocol. If using RAW, make sure the RAW port is not already in use on any of the nodes. Take a look in the nifi-app.log for the exception, as it is likely to include a full stack trace that may shed more light on your issue. It would be hard for me to say exactly what your issue is unless I knew your NiFi setup (nifi.properties) and the specific configuration of your SiteToSiteStatusReportingTask. What do you encounter if you use the HTTP instead of the RAW transport protocol?

I'd also suggest starting a new Community question, as this new question is not related to your original question in this post.
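As a reference point, a minimal SiteToSiteStatusReportingTask setup against a secured two-node cluster might look like the following sketch (the hostnames, port, and port name are hypothetical examples, not values from your environment):

```
Destination URL:      https://nifi-1.example.com:8443/nifi,https://nifi-2.example.com:8443/nifi
Input Port Name:      status-input
SSL Context Service:  your configured SSL context service (keystore/truststore)
Transport Protocol:   HTTP
```

With the HTTP transport protocol, S2S traffic rides over the same HTTPS port as the UI/API, which avoids having to open a separate RAW socket port on every node.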
Thank you, Matt
09-23-2025
05:26 AM
@HoangNguyen The "Deprecation log" has nothing to do with the dataflows running on your NiFi canvas. It contains notifications about NiFi components you may be using that are deprecated. Deprecated components get removed in future Apache NiFi versions, and this log makes you aware of that so you can make dataflow design changes to stop using them before you migrate to a newer Apache NiFi release.

The NiFi standard log files include the bootstrap log, app log, and user log. The app log is where you will find all your dataflow-component-based logging. In logback.xml, a "logger" will write to nifi-app.log by default unless a specific "appender-ref" is declared for that logger. The nifi-app.log can produce a lot of logging, but to get it all you can adjust:

<logger name="org.apache.nifi" level="INFO"/>

to "DEBUG" instead of "INFO". It will be very noisy. The standard Logback log levels are:

- OFF: Turns off all logging; no messages will be output.
- ERROR: Indicates a serious error that might still allow the application to continue running, but requires attention.
- WARN: Indicates a potentially harmful situation that should be investigated, but does not necessarily prevent the application from continuing.
- INFO: Provides general information about the application's progress and significant events.
- DEBUG: Offers detailed information useful for debugging purposes, often including variable values and execution flow.
- TRACE: Provides even finer-grained information than DEBUG, typically used for extremely detailed tracing of execution paths.
- ALL: Enables all logging, including messages at all other levels.

Keep in mind that just because you set the DEBUG log level does not mean every component will produce DEBUG-level log messages. It all depends on what logging exists within the component class and its dependent libraries. When a logger is set to DEBUG, it will log DEBUG and all levels below it (INFO, WARN, ERROR); if you set "INFO", you also get WARN and ERROR logging.

NiFi user authorization logging goes to nifi-user.log; this is logging related to access to NiFi. The nifi-bootstrap.log has logging for your NiFi bootstrap process. The bootstrap is what is launched when you execute the nifi.sh start command; it then starts the NiFi main child process, which loads your NiFi and dataflows, and it monitors that child process to make sure it is still alive (restarting it automatically if it dies). Thank you, Matt
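To illustrate the appender-ref behavior mentioned above, here is a sketch of a dedicated appender and logger you could add to conf/logback.xml to route one processor class into its own file. The file name and the PutFile class are examples only; substitute the component class you actually care about:

```xml
<!-- Hypothetical example: send PutFile logging to its own rolling file -->
<appender name="PUTFILE_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-putfile.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-putfile_%d.log</fileNamePattern>
        <maxHistory>5</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- additivity="false" stops these messages from also going to nifi-app.log -->
<logger name="org.apache.nifi.processors.standard.PutFile" level="DEBUG" additivity="false">
    <appender-ref ref="PUTFILE_FILE"/>
</logger>
```

Without the explicit appender-ref (and with additivity left at its default of true), the DEBUG output would land in nifi-app.log along with everything else.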
09-22-2025
02:12 PM
1 Kudo
@Bern I am having difficulty clearly understanding your question. I will start by saying that Apache NiFi 1.11.4 was released way back in 2019 and will have many unresolved CVEs. I strongly encourage you to at least upgrade to the latest available NiFi 1.x release (1.28.1), as I know migrating to Apache NiFi 2.x versions takes a good amount of planning and likely some dataflow redesign work.

I think first we need to get our terminology correct, so we can communicate clearly on the issue/question. NiFi processors are what you add to the canvas to perform specific tasks. Processors have connections that allow you to connect a processor with another component (processor, input port, output port, funnel, etc.). NiFi Reporting Tasks are added via the NiFi controller and perform their function in the background. Then you also have NiFi Controller Services, which are services used by other components (processors, for example).

The SiteToSiteStatusReportingTask has been a part of Apache NiFi since the 1.2.0 release, so it does exist in your 1.11.4 version. The screenshot you shared is showing a bunch of Controller Services, so you are in the wrong UI for adding a Reporting Task. You can find and add a NiFi Reporting Task by clicking on the NiFi Global menu in the upper right corner of the UI and selecting "Controller Settings" from the displayed menu. From the UI that appears, select the "Reporting Tasks" tab, then click the box to the far right with the "+" symbol to bring up the UI for selecting the Reporting Task you wish to add. NOTE: The list of available Reporting Tasks will vary by Apache NiFi release version.

What I actually think you will want to use is the SiteToSiteBulletinReportingTask. You can use this Reporting Task to send the bulletins your processors are producing to a NiFi remote Input Port. Your processors generate ERROR bulletins by default when issues occur, so you can build a dataflow that processes the bulletins sent to it via this Reporting Task and does whatever alerting you need; for example, sending an email using the PutEmail processor to alert someone about specific errors. Thank you, Matt
09-22-2025
06:37 AM
1 Kudo
@Kumar1243 Try using the following spec:

[
{
"operation": "shift",
"spec": {
"Product": [
"Product",
"to_PlndIndepRqmtItem[0].Product"
],
"Plant": [
"Plant",
"to_PlndIndepRqmtItem[0].Plant"
],
"MRPArea": [
"MRPArea",
"to_PlndIndepRqmtItem[0].MRPArea"
],
"PlndIndepRqmtType": [
"PlndIndepRqmtType",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtType"
],
"PlndIndepRqmtVersion": [
"PlndIndepRqmtVersion",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtVersion"
],
"RequirementPlan": [
"RequirementPlan",
"to_PlndIndepRqmtItem[0].RequirementPlan"
],
"RequirementSegment": [
"RequirementSegment",
"to_PlndIndepRqmtItem[0].RequirementSegment"
],
"PlndIndepRqmtPeriod": [
"PlndIndepRqmtPeriod",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtPeriod"
],
"PlndIndepRqmtIsActive": "PlndIndepRqmtIsActive",
"NoWithdrawal": "NoWithdrawal",
"DeleteOld": "DeleteOld",
"PeriodType": "to_PlndIndepRqmtItem[0].PeriodType",
"PlannedQuantity": "to_PlndIndepRqmtItem[0].PlannedQuantity",
"UnitOfMeasure": "to_PlndIndepRqmtItem[0].UnitOfMeasure",
"ProductionVersion": "to_PlndIndepRqmtItem[0].ProductionVersion"
}
}
]

You can use either the JoltTransformRecord or JoltTransformJSON processor. JoltTransformRecord will allow you to define a schema for your multi-record input FlowFiles; JoltTransformJSON would require you to split your source FlowFile first so you have one record per FlowFile. Hope this helps you get closer to success. Thank you, Matt
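To make the shift's effect concrete, here is a small Python sketch (not NiFi code; the sample field values are invented) that mirrors what the spec above does to a single record: the first eight fields are duplicated at both the top level and inside to_PlndIndepRqmtItem[0], three fields stay top-level only, and the remaining fields move into the nested item only.

```python
# Fields the spec copies to BOTH the top level and the nested item:
BOTH = {"Product", "Plant", "MRPArea", "PlndIndepRqmtType",
        "PlndIndepRqmtVersion", "RequirementPlan",
        "RequirementSegment", "PlndIndepRqmtPeriod"}
# Fields the spec keeps at the top level only:
TOP_ONLY = {"PlndIndepRqmtIsActive", "NoWithdrawal", "DeleteOld"}
# Everything else (PeriodType, PlannedQuantity, UnitOfMeasure,
# ProductionVersion) moves into to_PlndIndepRqmtItem[0] only.

def shift(record):
    """Mimic the Jolt shift spec for one flat input record."""
    out, item = {}, {}
    for key, value in record.items():
        if key in BOTH:
            out[key] = value
            item[key] = value
        elif key in TOP_ONLY:
            out[key] = value
        else:
            item[key] = value
    out["to_PlndIndepRqmtItem"] = [item]
    return out

# Invented sample record, for illustration only:
sample = {"Product": "P-100", "Plant": "0001",
          "PlndIndepRqmtIsActive": True, "PeriodType": "M",
          "PlannedQuantity": 10}
result = shift(sample)
```

Running this, "Product" ends up in both places, "PlndIndepRqmtIsActive" stays top-level, and "PeriodType"/"PlannedQuantity" appear only inside the nested array element.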
09-22-2025
05:32 AM
@HoangNguyen As long as you are running a new enough version of Apache NiFi, you'll have an option in the Process Group configuration to set a logging suffix. For each Process Group you want a separate log file for, create a unique suffix. In the above example I used the suffix "extracted"; in my NiFi "logs" directory, I now have a new "nifi-app-extracted.log" file that contains the logging output of every component contained within that Process Group. Thank you, Matt
09-18-2025
08:26 AM
1 Kudo
@asand3r JVM garbage collection is stop-the-world, which prevents the Kafka clients from communicating with Kafka for the duration of that GC event. If that pause is long enough, it could cause Kafka to do a rebalance. I can't say for certain that you are experiencing that. Maybe put the ConsumeKafka processor class at INFO level logging and monitor the nifi-app.log for any indication of a rebalance happening.

When it comes to GC pauses, a common mistake I see is individuals setting the JVM heap in NiFi way too high simply because the server on which they have installed NiFi has a lot of installed memory. Since GC only happens once the allocated JVM memory utilization reaches around 80%, large heaps can lead to long stop-the-world pauses if there is a lot to clean up. Thank you, Matt
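As an illustration, the JVM heap bounds live in NiFi's conf/bootstrap.conf. Setting a modest, matched min/max pair (the 4g value below is only an example for this sketch, and the java.arg numbering may differ in your file) keeps individual GC cycles shorter than one oversized heap would:

```
# conf/bootstrap.conf (example values; tune to your actual workload)
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```

Matching -Xms and -Xmx avoids heap resizing at runtime; size the heap to your flow's real working set rather than to the server's total RAM.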
09-16-2025
11:51 AM
1 Kudo
@AlokKumar Then you'll want to build your dataflow around the HandleHttpRequest and HandleHttpResponse processors. You build your processing between those two processors, or perhaps use multiple HandleHttpResponse processors to control the response to the request based on the outcome of your processing. Thank you, Matt