Member since: 07-30-2019
Posts: 3349
Kudos Received: 1612
Solutions: 988

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 72 | 09-23-2025 08:56 AM
 | 46 | 09-23-2025 05:58 AM
 | 43 | 09-23-2025 05:26 AM
 | 102 | 09-22-2025 02:12 PM
 | 39 | 09-22-2025 06:37 AM
09-24-2025
05:55 AM
@AlokKumar NiFi authorization policies are very granular. A user will not have access to the flow development icons across the top of the UI unless that user is authorized for "modify the component" on the currently accessed process group, and you would typically also want those same users authorized for "view the component". Depending on what additional access you want each user to have, you'll probably be authorizing them for even more NiFi policies.

Keep in mind that adding a user to the "modify the component" authorization policy on the "Copy of ProcessGroupAdminTest" process group only gives that user the ability to add and modify components within that process group and any child/sub process group of "Copy of ProcessGroupAdminTest" (if a child/sub process group is not inheriting authorizations from the parent process group, then a user you add to the parent would not have the same access to that child/sub process group).

NiFi has "Global Access Policies" and "Component Access Policies". The Global Access Policies are set by accessing the NiFi Global menu (three horizontal lines in the upper right corner of the NiFi UI) and then "Policies". If you hover your cursor over an access policy, a pop-up describes what that policy grants access for.

The level of access you want to provide your individual users/teams is completely up to you. ALL users must be authorized for the "view the user interface" global access policy in order to access the NiFi UI, but that does not give the user much access beyond that. So you need to decide which users will be building dataflows; NiFi refers to them as DataFlow Managers (DFMs). You may also have operators to whom you only grant "view the component" and "operate the component" on certain dataflows, with no authorization to modify components or view the data.

The component-level access policies are set by clicking on the "key" icon for a selected component. For example: below I have clicked on the "GenerateFlowFile" processor, and we can see it in the "Operate" panel to its left. Inside that Operate panel, your admin user (or other users you have authorized) will have access to the policies for that component.

Granting a user "view the component" and "modify the component" on a process group will give that user the ability to build and operate dataflows. But that user will still not be authorized to view the content of the FlowFiles traversing that dataflow, empty a connection queue, or view the provenance data produced by those components unless you set those additional authorizations.
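If you happen to be using NiFi's file-based authorizer, these component policies are ultimately persisted as entries in conf/authorizations.xml. As a rough illustration only (the identifiers below are placeholder UUIDs, and your authorizer configuration may differ), a "view the component" / "modify the component" pair on a process group looks something like this:

<policies>
    <!-- "view the component" on a process group maps to action="R" (read) -->
    <policy identifier="11111111-0000-0000-0000-000000000001"
            resource="/process-groups/22222222-0000-0000-0000-000000000002" action="R">
        <user identifier="33333333-0000-0000-0000-000000000003"/>
    </policy>
    <!-- "modify the component" on the same process group maps to action="W" (write) -->
    <policy identifier="11111111-0000-0000-0000-000000000004"
            resource="/process-groups/22222222-0000-0000-0000-000000000002" action="W">
        <user identifier="33333333-0000-0000-0000-000000000003"/>
    </policy>
</policies>

Manage these through the UI as described above rather than editing the file by hand; the sketch is just to show that "view" and "modify" are two separate policies on the same resource.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt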
09-23-2025
10:36 AM
1 Kudo
@Bern This is your new question without an accepted solution, so I am a bit confused by your last response. Matt
09-23-2025
10:09 AM
@AlokKumar NiFi allows very granular authorizations down to the individual component. A component such as a processor will inherit its authorizations from the process group in which it resides IF there are no explicit policies set directly on the processor itself. Likewise, a process group will inherit its authorizations from its parent process group if it does not have explicit policies set directly on that child process group.

When you launch NiFi for the very first time, NiFi will create the root process group for you with the name "NiFi Flow". It is the UI canvas you see when you access the UI. From the second image you shared, we can see that you have accessed the "policies" for a child process group named "Copy of ProcessGroupAdminTest". What we can also see is that it is inheriting the "view the component" policy from the root process group "NiFi Flow". This is why you see the add user and delete options greyed out. You need to first click "Override" and choose either to start with no users or to copy the currently authorized users. After doing this you will be able to add additional users to this policy on this child process group.

Keep in mind that once you override inheritance on this component's policies, inheritance no longer applies to this component. Any changes to the policies set on the parent process group "NiFi Flow" will not get applied to this child process group.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt
09-23-2025
08:56 AM
1 Kudo
@Bern The two outputs you shared are from two different Site-To-Site reporting tasks. The first is produced by the SiteToSiteBulletinReportingTask. Additional Details... The second is produced by the SiteToSiteStatusReportingTask, and its fields will vary based upon the type of component. Additional Details...

The exceptions you shared are bulletins only, and they always report an issue sending to the http://node-1:8080/ node. I see all your other configurations are based on IPs. Are all your NiFi nodes able to properly resolve "node-1" to the correct IP address?
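For reference, a single record emitted by the SiteToSiteBulletinReportingTask looks roughly like the following. This is an illustrative sketch with made-up values; the exact field set depends on your NiFi version:

{
  "objectId": "aaaa-bbbb-cccc-dddd",
  "platform": "nifi",
  "bulletinId": 1759,
  "bulletinCategory": "Log Message",
  "bulletinGroupId": "pg-uuid",
  "bulletinGroupName": "MyProcessGroup",
  "bulletinLevel": "ERROR",
  "bulletinMessage": "Unable to communicate with remote NiFi at http://node-1:8080/ ...",
  "bulletinNodeAddress": "node-1:8080",
  "bulletinSourceId": "component-uuid",
  "bulletinSourceName": "MyRemoteProcessGroup",
  "bulletinSourceType": "REMOTE_PROCESS_GROUP",
  "bulletinTimestamp": "2025-09-23T12:00:00.000Z"
}

Note how the bulletin records are component-centric with a message and level, while the status reporting records carry different fields depending on the component type.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt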
09-23-2025
05:58 AM
1 Kudo
@Bern For NiFi Site-To-Site (S2S), you can NOT have each node configured differently (other than each node's unique hostname being set). The way Site-To-Site works is as follows:

The Destination URL is configured with a comma-separated list of NiFi URLs for the hosts in the target NiFi cluster (a comma-separated list allows S2S to keep functioning if one of the nodes in the target cluster is down). You can configure just one target URL if you want and it will still work. If your NiFi cluster is secured, the destination URLs must also be HTTPS URLs.

S2S attempts to connect to the first URL in the list to fetch S2S details about the target cluster (number of nodes in the cluster, cluster hostnames, whether HTTP is enabled, the RAW port of each node, the load on each node, etc.). The S2S details are rechecked every 30 seconds to see if they have changed (for example, a node being added to or removed from the target cluster). S2S then uses that information to distribute FlowFiles across all nodes in the destination NiFi cluster.

The client (the SiteToSiteStatusReportingTask) dictates whether you want to use the RAW or HTTP transport protocol. If using RAW, make sure the RAW port is not already in use on any of the nodes.

Take a look in the nifi-app.log for the exception, as it likely includes a full stack trace that may shed more light on your issue. It would be hard for me to say exactly what your issue is unless I knew your NiFi setup (nifi.properties) and the specific configuration of your SiteToSiteStatusReportingTask. What do you encounter if you use the HTTP transport protocol instead of RAW?

I'd also suggest starting a new Community question, as this new question is not related to your original question in this post.
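As a point of reference, these are the Site-To-Site input properties in nifi.properties that must be set consistently across the cluster (the values below are examples only; the host will of course be unique per node):

# Site-To-Site settings in nifi.properties (example values)
# Unique per node:
nifi.remote.input.host=node-1.example.com
# true when the cluster is secured (destination URLs must then be HTTPS):
nifi.remote.input.secure=true
# RAW transport port; must be free on every node:
nifi.remote.input.socket.port=10443
# Enables the HTTP transport protocol:
nifi.remote.input.http.enabled=true

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt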
09-23-2025
05:26 AM
@HoangNguyen The "deprecation log" has nothing to do with your running dataflows on your NiFi canvas. The deprecation log contains notifications about NiFi components you may be using that are deprecated. Deprecated components get removed in future Apache NiFi versions. This log exists to make you aware of that so you can make dataflow design changes to stop using them before you migrate to a newer Apache NiFi release.

The NiFi standard log files include the bootstrap log, app log, and user log. The app log is where you will find all your dataflow component based logging. In the logback.xml, a "logger" will write to the nifi-app.log by default unless a specific "appender-ref" is declared for that logger. The nifi-app.log can produce a lot of logging, but to get it all you can adjust:

<logger name="org.apache.nifi" level="INFO"/>

to "DEBUG" instead of "INFO". It will be very noisy. The standard logback log levels are:

OFF: Turns off all logging. No messages will be output.
ERROR: Indicates a serious error that might still allow the application to continue running, but requires attention.
WARN: Indicates a potentially harmful situation that should be investigated, but does not necessarily prevent the application from continuing.
INFO: Provides general information about the application's progress and significant events.
DEBUG: Offers detailed information useful for debugging purposes, often including variable values and execution flow.
TRACE: Provides even finer-grained information than DEBUG, typically used for extremely detailed tracing of execution paths.
ALL: Enables all logging, including messages at all other levels.

Keep in mind that just because you set the DEBUG log level does not mean every component will produce DEBUG level log messages. It all depends on what logging exists within the component class and its dependent libraries. When set to DEBUG, a logger will log DEBUG and all levels below it (INFO, WARN, ERROR). If you set "INFO", you also get WARN and ERROR logging.

NiFi user authorization logging goes to the nifi-user.log. This is logging related to access to NiFi. The nifi-bootstrap.log has logging for your NiFi bootstrap process. The bootstrap is what is launched when you execute the nifi.sh start command. The bootstrap then starts the NiFi main child process, which loads your NiFi and dataflows, and monitors that child process to make sure it is still alive (restarting it automatically if it dies).
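If you want one specific component's logging routed to its own file instead of nifi-app.log, you can add a dedicated appender and logger in logback.xml. A minimal sketch, assuming the GenerateFlowFile processor class and NiFi's standard log directory property (adjust both to your environment):

<!-- Dedicated rolling log file for a single processor class -->
<appender name="CUSTOM_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-custom.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-custom_%d.log</fileNamePattern>
        <maxHistory>5</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- additivity="false" keeps these messages out of nifi-app.log -->
<logger name="org.apache.nifi.processors.standard.GenerateFlowFile" level="DEBUG" additivity="false">
    <appender-ref ref="CUSTOM_FILE"/>
</logger>

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt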
09-22-2025
02:12 PM
1 Kudo
@Bern I am having difficulty clearly understanding your question. I will start by saying that Apache NiFi 1.11.4 was released back in 2020 and will have many unresolved CVEs. I strongly encourage you to at least upgrade to the latest available NiFi 1.x release, 1.28.1, as I know migrating to Apache NiFi 2.x versions takes a good amount of planning and likely some dataflow redesign work.

I think first we need to get our terminology correct so we can communicate clearly on the issue/question. NiFi processors are what you add to the canvas to perform specific tasks. Processors have connections that allow you to connect a processor with another component (processor, input port, output port, funnel, etc.). NiFi Reporting Tasks are added via the NiFi controller and perform their function in the background. Then you also have NiFi Controller Services, which are services used by other components (processors, for example).

The SiteToSiteStatusReportingTask reporting task has been a part of Apache NiFi since the 1.2.0 release, so it does exist in your 1.11.4 version. The screenshot you shared is showing a bunch of Controller Services, so you are in the wrong UI for adding a Reporting Task. You can find and add a NiFi Reporting Task by clicking on the NiFi Global menu in the upper right corner of the UI and selecting "Controller Settings" from the displayed menu. From the UI that appears, you will be able to select the "Reporting Tasks" tab. Click the box to the far right with the "+" symbol to bring up the UI for selecting the Reporting Task you wish to add. NOTE: The list of available Reporting Tasks will vary by Apache NiFi release version.

What I actually think you will want to use is the SiteToSiteBulletinReportingTask. You can use this Reporting Task to send the bulletins your processors are producing to a NiFi remote input port. Your processors generate ERROR bulletins by default when issues occur, so you can build a dataflow that processes the bulletins sent to it via this Reporting Task and does alerting as you need. For example: send an email using the PutEmail processor to alert someone about specific errors.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt
09-22-2025
06:37 AM
1 Kudo
@Kumar1243 Try using the following spec:

[
{
"operation": "shift",
"spec": {
"Product": [
"Product",
"to_PlndIndepRqmtItem[0].Product"
],
"Plant": [
"Plant",
"to_PlndIndepRqmtItem[0].Plant"
],
"MRPArea": [
"MRPArea",
"to_PlndIndepRqmtItem[0].MRPArea"
],
"PlndIndepRqmtType": [
"PlndIndepRqmtType",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtType"
],
"PlndIndepRqmtVersion": [
"PlndIndepRqmtVersion",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtVersion"
],
"RequirementPlan": [
"RequirementPlan",
"to_PlndIndepRqmtItem[0].RequirementPlan"
],
"RequirementSegment": [
"RequirementSegment",
"to_PlndIndepRqmtItem[0].RequirementSegment"
],
"PlndIndepRqmtPeriod": [
"PlndIndepRqmtPeriod",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtPeriod"
],
"PlndIndepRqmtIsActive": "PlndIndepRqmtIsActive",
"NoWithdrawal": "NoWithdrawal",
"DeleteOld": "DeleteOld",
"PeriodType": "to_PlndIndepRqmtItem[0].PeriodType",
"PlannedQuantity": "to_PlndIndepRqmtItem[0].PlannedQuantity",
"UnitOfMeasure": "to_PlndIndepRqmtItem[0].UnitOfMeasure",
"ProductionVersion": "to_PlndIndepRqmtItem[0].ProductionVersion"
}
}
]

You can use the JoltTransformRecord or JoltTransformJson processors. The JoltTransformRecord processor will allow you to define a schema for your multi-record input FlowFiles. The JoltTransformJson processor would require you to split your source FlowFile first so you have one record per FlowFile.
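To illustrate what this shift does, here is a before/after sketch using made-up values for a few of the fields (fields absent from the input are simply skipped by the shift, so a partial input still works):

Example input:

{
  "Product": "FG-100",
  "Plant": "0001",
  "PlndIndepRqmtIsActive": true,
  "PeriodType": "M",
  "PlannedQuantity": 500
}

Resulting output:

{
  "Product": "FG-100",
  "Plant": "0001",
  "PlndIndepRqmtIsActive": true,
  "to_PlndIndepRqmtItem": [
    {
      "Product": "FG-100",
      "Plant": "0001",
      "PeriodType": "M",
      "PlannedQuantity": 500
    }
  ]
}

Hope this helps you get closer to success.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt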
09-22-2025
05:32 AM
@HoangNguyen As long as you are running a new enough version of Apache NiFi, you'll have an option in the process group configuration to set a logging suffix. For each process group you want a separate log file for, set a unique suffix. In the above example I used the suffix "extracted". In my NiFi "logs" directory, I now have a new "nifi-app-extracted.log" file that contains the logging output of every component contained within that process group.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt
09-18-2025
08:26 AM
1 Kudo
@asand3r JVM garbage collection is stop-the-world, which prevents the Kafka clients from communicating with Kafka for the duration of that GC event. If that pause is long enough, it could cause Kafka to do a rebalance. I can't say for certain that you are experiencing that. Maybe put the ConsumeKafka processor class in INFO level logging and monitor the nifi-app.log for any indication of a rebalance happening.

When it comes to GC pauses, a common mistake I see is individuals setting the JVM heap in NiFi way too high simply because the server on which they have installed NiFi has a lot of installed memory. Since GC only happens once the allocated JVM memory utilization reaches around 80%, large heaps could lead to long stop-the-world pauses if there is a lot of clean-up to do.
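For reference, the heap is set via the java.arg lines in conf/bootstrap.conf. A minimal sketch with illustrative values only; size these to what your dataflows actually need rather than to the server's total memory:

# JVM heap settings in conf/bootstrap.conf (illustrative values only)
# Keep -Xms and -Xmx equal, and only as large as your flows require.
java.arg.2=-Xms8g
java.arg.3=-Xmx8g

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to login and click "Accept as Solution" on one or more of them. Thank you, Matt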