Member since: 07-30-2019
Posts: 3392
Kudos Received: 1618
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 416 | 11-05-2025 11:01 AM |
| | 298 | 11-05-2025 08:01 AM |
| | 447 | 11-04-2025 10:16 AM |
| | 666 | 10-20-2025 06:29 AM |
| | 806 | 10-10-2025 08:03 AM |
09-25-2025
05:51 AM
@jame1997 There is not enough information yet to say what is going on in your environment. Can you provide more details? What method of authentication are you using in your NiFi (single-user, ldap-provider, kerberos-provider, SAML, etc.)? Assuming you are using a login-based provider, are you seeing the NiFi login page and then successfully logging in? Do you ever see the NiFi UI canvas, or do you encounter this exception as soon as you log in? Have you inspected the nifi-user.log and nifi-app.log on every node at the time of the attempted login to see which node reports the authentication success and which node reports the shared exception? If you do successfully access the NiFi canvas, how long does your access last before you encounter the exception? Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
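A quick way to do that per-node log inspection is to grep each node's logs for authentication outcomes. This is only a minimal sketch: the log path and the exact message wording are assumptions, so adjust them to match your install and what your logs actually say.

```shell
# Sketch: scan nifi-user.log for authentication results on one node.
# Path and message patterns are assumptions; adjust for your install.
LOG=${1:-/var/log/nifi/nifi-user.log}

# For illustration only: fall back to a tiny sample log if the real
# file is absent, so the grep patterns below have something to match.
if [ ! -f "$LOG" ]; then
  LOG=$(mktemp)
  cat > "$LOG" <<'EOF'
2025-09-25 05:40:01,000 INFO [NiFi Web Server] Authentication success for admin
2025-09-25 05:40:03,000 ERROR [NiFi Web Server] Authentication exception: token not valid
EOF
fi

echo "--- successes ---"
grep -i "authentication success" "$LOG" || true
echo "--- failures ---"
grep -iE "authentication (exception|failure)" "$LOG" || true
```

Running this on every node around the time of a failed login helps show which node authenticated the user and which node raised the exception.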
09-24-2025
01:51 PM
@carange I sent you a DM with next steps to help you resolve this issue, thanks!
09-23-2025
06:43 PM
Hello @MattWho , Thank you so much. Very clear and useful details.
09-23-2025
11:18 AM
Back to this, and again I am very grateful. I made the suggested changes to the definition of the S2S URLs, and yes, the nodes resolve correctly as node-1, node-2, node-3. My issue with the S2S messages is that they reach the log and fill up the disk. I have budget limitations on hardware and cannot allow disk usage to grow, or my NiFi goes down from a full disk; that is why I have maintenance shell scripts that run every day to keep disk usage in check. I checked, and indeed the two reporting tasks, the S2S bulletin and the S2S status, both work for me; they will remain in different tables. I made the changes. I had been confused about the formats of one versus the other. Thank you very much, Matt
09-23-2025
06:03 AM
I opened a new message because this thread's original question was already resolved by its solution, and this is a separate query where I attached the properties and the flow: https://community.cloudera.com/t5/Support-Questions/Site-to-Site-Status-Reporting-Task-Error-Notification-Issue/td-p/412426
09-22-2025
06:37 AM
1 Kudo
@Kumar1243 Try using the following spec: [
{
"operation": "shift",
"spec": {
"Product": [
"Product",
"to_PlndIndepRqmtItem[0].Product"
],
"Plant": [
"Plant",
"to_PlndIndepRqmtItem[0].Plant"
],
"MRPArea": [
"MRPArea",
"to_PlndIndepRqmtItem[0].MRPArea"
],
"PlndIndepRqmtType": [
"PlndIndepRqmtType",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtType"
],
"PlndIndepRqmtVersion": [
"PlndIndepRqmtVersion",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtVersion"
],
"RequirementPlan": [
"RequirementPlan",
"to_PlndIndepRqmtItem[0].RequirementPlan"
],
"RequirementSegment": [
"RequirementSegment",
"to_PlndIndepRqmtItem[0].RequirementSegment"
],
"PlndIndepRqmtPeriod": [
"PlndIndepRqmtPeriod",
"to_PlndIndepRqmtItem[0].PlndIndepRqmtPeriod"
],
"PlndIndepRqmtIsActive": "PlndIndepRqmtIsActive",
"NoWithdrawal": "NoWithdrawal",
"DeleteOld": "DeleteOld",
"PeriodType": "to_PlndIndepRqmtItem[0].PeriodType",
"PlannedQuantity": "to_PlndIndepRqmtItem[0].PlannedQuantity",
"UnitOfMeasure": "to_PlndIndepRqmtItem[0].UnitOfMeasure",
"ProductionVersion": "to_PlndIndepRqmtItem[0].ProductionVersion"
}
}
] You can use the JoltTransformRecord or JoltTransformJson processors. The JoltTransformRecord processor allows you to define a schema for your multi-record input FlowFiles. The JoltTransformJson processor would require you to split your source FlowFile first so that you have one record per FlowFile. Hope this helps you get closer to success. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
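To illustrate what the shift spec above produces, here is a plain-Python sketch of the same restructuring applied to one record. This is not Jolt itself, just the target shape; the input values are made up.

```python
# Sketch of the Jolt shift spec's effect on a single record.
# Input values are hypothetical; only the shape matters.
record = {
    "Product": "P-100",
    "Plant": "0001",
    "PlndIndepRqmtIsActive": True,
    "PeriodType": "M",
    "PlannedQuantity": 50,
}

# Fields the spec keeps at the top level AND copies into the nested item:
DUAL = {"Product", "Plant", "MRPArea", "PlndIndepRqmtType",
        "PlndIndepRqmtVersion", "RequirementPlan",
        "RequirementSegment", "PlndIndepRqmtPeriod"}
# Fields the spec keeps only at the top level:
TOP_ONLY = {"PlndIndepRqmtIsActive", "NoWithdrawal", "DeleteOld"}

out, item = {}, {}
for key, value in record.items():
    if key in DUAL:
        out[key] = value
        item[key] = value
    elif key in TOP_ONLY:
        out[key] = value
    else:
        # everything else moves into the nested item only
        item[key] = value
out["to_PlndIndepRqmtItem"] = [item]

print(out)
```

So "Product" appears both at the top level and inside to_PlndIndepRqmtItem[0], while "PeriodType" ends up only inside the nested item.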
09-18-2025
08:26 AM
1 Kudo
@asand3r JVM garbage collection is stop-the-world, which prevents the Kafka clients from communicating with Kafka for the duration of the GC event. If that pause is long enough, it could cause Kafka to do a rebalance. I can't say for certain that is what you are experiencing. Maybe set the ConsumeKafka processor class to INFO level logging and monitor the nifi-app.log for any indication of a rebalance happening. When it comes to GC pauses, a common mistake I see is individuals setting the JVM heap in NiFi way too high simply because the server on which they have installed NiFi has a lot of installed memory. Since GC only happens once the allocated JVM memory utilization reaches around 80%, large heaps can lead to long stop-the-world pauses if there is a lot of clean-up to do. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
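For reference, NiFi's JVM heap is set in conf/bootstrap.conf. A minimal config sketch follows; the 4g values are purely illustrative, not a recommendation, and the right size depends on your flow.

```properties
# conf/bootstrap.conf -- JVM heap arguments (illustrative values only;
# keep the heap only as large as your dataflow actually needs)
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```

Setting -Xms and -Xmx to the same value avoids heap resizing, and keeping the heap modest keeps each GC pause short.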
09-16-2025
11:51 AM
1 Kudo
@AlokKumar Then you'll want to build your dataflow around the HandleHTTPRequest and HandleHTTPResponse processors. You build your processing between those two processors, or you may have multiple HandleHTTPResponse processors to control the response to the request based on the outcome of your processing. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
09-15-2025
10:55 AM
Thank you for replying, that's the exact solution I eventually settled on. Best, Shelly
09-12-2025
11:41 AM
@Alexm__ While I have never done anything myself with Azure DevOps pipelines, I don't see why this would not be possible. Dev, test, and prod environments will likely have slight variations in NiFi configuration (source and target service URLs, usernames/passwords, etc.), so when designing your Process Group dataflows you'll want to take that into account and utilize NiFi's Parameter Contexts to define such variable configuration properties. Sensitive properties (passwords) are never passed to NiFi-Registry, so any version-controlled PG imported into another NiFi will not have the passwords set. Once you version control that PG, you can deploy it through rest-api calls to other NiFi deployments. The first time it is deployed, it will simply import the parameter context used in the source (dev) environment. You would need to modify that parameter context in the test and prod environments to set passwords and alter any other parameters as needed by each unique env. Once the modified parameter context of the same name exists in the other environments, promoting new versions of dataflows that use that parameter context becomes very easy. The updated dataflows will continue to use the local env's parameter context values rather than those used in dev. If a new parameter is introduced to the parameter context, it simply gets added to the existing parameter context of the same name in the test and prod envs. So this is something to account for in your automated promotion of version-controlled dataflows between environments. See: Versioning a DataFlow, and Parameters in Versioned Flows. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
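The promotion model described above can be sketched in plain Python: one versioned flow that names its parameters, and one parameter context per environment that must supply them. The parameter names, environments, and values here are all hypothetical, purely to illustrate why passwords must be set by hand per environment.

```python
# Sketch of per-environment parameter contexts for a promoted flow.
# All names and values are hypothetical.
versioned_flow_params = ["db.url", "db.user", "db.password"]  # defined in dev

# Parameter contexts of the same name maintained in each target env:
env_contexts = {
    "test": {"db.url": "jdbc:postgresql://test-db/app", "db.user": "app"},
    "prod": {"db.url": "jdbc:postgresql://prod-db/app", "db.user": "app",
             "db.password": "s3cret"},
}

def missing_params(env: str) -> list:
    """Parameters the promoted flow needs that this environment has not
    set yet. Sensitive values (passwords) never travel with the flow,
    so they always start out missing in a freshly imported context."""
    ctx = env_contexts.get(env, {})
    return [p for p in versioned_flow_params if p not in ctx]

print(missing_params("test"))
print(missing_params("prod"))
```

In this sketch, the test context still lacks the password, so promotion automation would flag it for a manual (or pipeline-secret-driven) update before the flow can run there.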