Member since
07-30-2019
3386
Posts
1617
Kudos Received
998
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 336 | 10-20-2025 06:29 AM |
| | 476 | 10-10-2025 08:03 AM |
| | 343 | 10-08-2025 10:52 AM |
| | 369 | 10-08-2025 10:36 AM |
| | 400 | 10-03-2025 06:04 AM |
09-23-2025
06:03 AM
I opened a new thread because the original question in this one was already resolved by its accepted solution, and this is a separate query. I attached the properties and the flow in the new thread: https://community.cloudera.com/t5/Support-Questions/Site-to-Site-Status-Reporting-Task-Error-Notification-Issue/td-p/412426
09-22-2025
06:37 AM
1 Kudo
@Kumar1243 Try using the following spec:
[
  {
    "operation": "shift",
    "spec": {
      "Product": [
        "Product",
        "to_PlndIndepRqmtItem[0].Product"
      ],
      "Plant": [
        "Plant",
        "to_PlndIndepRqmtItem[0].Plant"
      ],
      "MRPArea": [
        "MRPArea",
        "to_PlndIndepRqmtItem[0].MRPArea"
      ],
      "PlndIndepRqmtType": [
        "PlndIndepRqmtType",
        "to_PlndIndepRqmtItem[0].PlndIndepRqmtType"
      ],
      "PlndIndepRqmtVersion": [
        "PlndIndepRqmtVersion",
        "to_PlndIndepRqmtItem[0].PlndIndepRqmtVersion"
      ],
      "RequirementPlan": [
        "RequirementPlan",
        "to_PlndIndepRqmtItem[0].RequirementPlan"
      ],
      "RequirementSegment": [
        "RequirementSegment",
        "to_PlndIndepRqmtItem[0].RequirementSegment"
      ],
      "PlndIndepRqmtPeriod": [
        "PlndIndepRqmtPeriod",
        "to_PlndIndepRqmtItem[0].PlndIndepRqmtPeriod"
      ],
      "PlndIndepRqmtIsActive": "PlndIndepRqmtIsActive",
      "NoWithdrawal": "NoWithdrawal",
      "DeleteOld": "DeleteOld",
      "PeriodType": "to_PlndIndepRqmtItem[0].PeriodType",
      "PlannedQuantity": "to_PlndIndepRqmtItem[0].PlannedQuantity",
      "UnitOfMeasure": "to_PlndIndepRqmtItem[0].UnitOfMeasure",
      "ProductionVersion": "to_PlndIndepRqmtItem[0].ProductionVersion"
    }
  }
]
You can use the JoltTransformRecord or JoltTransformJSON processors. JoltTransformRecord will allow you to define a schema for your multi-record input FlowFiles. The JoltTransformJSON processor would require you to split your source FlowFile first so you have one record per FlowFile. A sample input and output for this spec is sketched after this post. Hope this helps you get closer to success. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
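For illustration only, here is roughly what one record would look like before and after the shift spec above. The field values are made up for this sketch (they are not from the original thread), and only a few of the fields are shown; the remaining fields follow the same pattern.

Sample input:
{
  "Product": "FG-1001",
  "Plant": "1710",
  "PlndIndepRqmtIsActive": true,
  "PeriodType": "M",
  "PlannedQuantity": 100,
  "UnitOfMeasure": "PC"
}

Sample output:
{
  "Product": "FG-1001",
  "Plant": "1710",
  "PlndIndepRqmtIsActive": true,
  "to_PlndIndepRqmtItem": [
    {
      "Product": "FG-1001",
      "Plant": "1710",
      "PeriodType": "M",
      "PlannedQuantity": 100,
      "UnitOfMeasure": "PC"
    }
  ]
}

Header fields such as Product and Plant appear both at the top level and inside to_PlndIndepRqmtItem[0] because their right-hand side in the spec lists two target paths; PeriodType, PlannedQuantity, and UnitOfMeasure are moved only into the nested item.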
09-18-2025
08:26 AM
1 Kudo
@asand3r JVM garbage collection is stop-the-world, which prevents the Kafka clients from communicating with Kafka for the duration of that GC event. If the pause lasts long enough, it could cause Kafka to trigger a rebalance. I can't say for certain that is what you are experiencing. Maybe set the ConsumeKafka processor class to INFO level logging and monitor the nifi-app.log for any indication of a rebalance happening (a sketch of such a logging change follows this post). When it comes to GC pauses, a common mistake I see is individuals setting the JVM heap in NiFi way too high simply because the server on which they have installed NiFi has a lot of installed memory. Since GC only happens once the allocated JVM memory utilization reaches around 80%, large heaps can lead to long stop-the-world pauses if there is a lot of clean-up to do. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
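As a rough sketch only: processor log levels in NiFi are controlled through conf/logback.xml. The logger (class) name below is an assumption; replace it with the fully qualified class name of the ConsumeKafka processor version you are actually running before using it.

<!-- conf/logback.xml (sketch): raise ConsumeKafka logging to INFO to surface rebalance-related messages in nifi-app.log -->
<!-- The class name below is an assumption; use the fully qualified class name of your ConsumeKafka processor. -->
<logger name="org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_6" level="INFO"/>

You might also consider raising the org.apache.kafka client packages to INFO, since rebalance messages are often logged by the Kafka client libraries themselves.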
09-16-2025
11:51 AM
1 Kudo
@AlokKumar Then you'll want to build your dataflow around the HandleHttpRequest and HandleHttpResponse processors. You build your processing between those two processors, or perhaps use multiple HandleHttpResponse processors to control the response to the request based on the outcome of your processing. A minimal example of calling such a flow is sketched after this post. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
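Purely as an illustration of exercising such a flow from a client; the hostname, port, and path below are assumptions and should match whatever you configure on HandleHttpRequest:

# Assumes HandleHttpRequest listens on port 8081 and the flow expects requests on /ingest (both are placeholders).
# The status code and body of the reply come from whichever HandleHttpResponse processor the FlowFile reaches.
curl -v -X POST "http://nifi-host:8081/ingest" \
     -H "Content-Type: application/json" \
     -d '{"example": "payload"}'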
09-15-2025
10:55 AM
Thank you for replying, that's the exact solution I eventually settled on. Best, Shelly
09-12-2025
11:41 AM
@Alexm__ While I have never done anything myself with Azure DevOps pipelines, I don't see why this would not be possible. Dev, test, and prod environments will likely have slight variations in NiFi configuration (source and target service URLs, usernames/passwords, etc.). So when designing your Process Group dataflows you'll want to take that into account and use NiFi's Parameter Contexts to define such variable configuration properties. Sensitive properties (passwords) are never passed to NiFi Registry, so any version-controlled PG imported into another NiFi will not have the passwords set. Once you version control that PG, you can deploy it through REST API calls to other NiFi deployments. The first time it is deployed, it will simply import the parameter context used in the source (dev) environment. You would then need to modify that parameter context in the test and prod environments to set passwords and alter any other parameters as needed by each unique environment. Once a modified parameter context of the same name exists in the other environments, promoting new versions of dataflows that use that parameter context becomes very easy: the updated dataflows will continue to use the local environment's parameter context values rather than those used in dev, and if a new parameter is introduced to the parameter context, it simply gets added to the existing parameter context of the same name in test and prod. So there are a few things to account for in your automated promotion of version-controlled dataflows between environments; a rough sketch of inspecting parameter contexts over the REST API follows this post. Helpful references: "Versioning a DataFlow" and "Parameters in Versioned Flows". Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
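A minimal sketch of the kind of REST interaction a promotion pipeline might use, with heavy assumptions: the host, credentials, and endpoints below are placeholders and should be verified against the REST API documentation for your NiFi version (in particular /nifi-api/access/token, which applies only to login-based authentication, and /nifi-api/flow/parameter-contexts for listing contexts).

# Obtain a bearer token from the target environment (placeholder host and credentials).
TOKEN=$(curl -sk -X POST "https://test-nifi-host:8443/nifi-api/access/token" \
        -d "username=svc-deployer&password=REDACTED")

# List the parameter contexts so the pipeline can confirm a context of the expected
# name exists in this environment before promoting a new flow version that uses it.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
     "https://test-nifi-host:8443/nifi-api/flow/parameter-contexts"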
09-10-2025
11:44 PM
@MattWho We were able to resolve it; there was a typo in our API call. Thanks for your suggestion.
09-08-2025
05:41 AM
@yoonli It would be helpful if you shared the complete authorization exception you are encountering. I have a feeling your authorization exception is not related to your server certificate, but more related to your individual NiFi user. Using a load balancer in front of your NiFi cluster requires that session affinity (sticky sessions) be enabled in the load balancer. The why? Any login-based user authentication (ldap-provider, kerberos-provider, etc.) results in a bearer token being issued to the user and a corresponding server-side token being stored on the NiFi node that issued the client token. Only the specific node in the NiFi cluster that issued the client bearer token will have the corresponding server-side token, and your browser will include this client token in all subsequent requests to NiFi. If your load balancer does not have sticky sessions enabled, requests made after obtaining the client bearer token may get directed to a different node in the cluster; since the other nodes will not have the corresponding server-side token for your user, those requests result in a not-authorized response. A sketch of a sticky-session HAProxy backend follows this post.
Possible helpful HAProxy links: https://www.haproxy.com/blog/enable-sticky-sessions-in-haproxy https://www.haproxy.com/solutions/load-balancing
---- Certificate-based authentication is not affected, since the client/server mutual TLS exchange happens in every communication between client and server. This is why I suspect that your setup involves a login-based authentication method.
---- I see you configured your LB IP in the nifi.web.proxy.host property within the nifi.properties file. This property is not directly related to client/user authentication; it is about making sure NiFi accepts requests destined for a different hostname/IP than the host that actually received them. Let's say you initiate a connection to a URL containing the host https://10.29.144.56/nifi/. Your HAProxy then routes that request to NiFi on host 10.29.144.58, which returns a server certificate with that server's hostname or the IP 10.29.144.58. The connection would be blocked because it appears to be a man-in-the-middle attack: the expectation was that the request would be processed by the server 10.29.144.56, yet host 10.29.144.58 received it. By adding 10.29.144.56 to the nifi.web.proxy.host property, you are telling NiFi to accept requests intended for a hostname or IP other than the actual NiFi node's hostname or IP. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
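As a rough sketch only, a source-IP based sticky backend in haproxy.cfg could look like the following; the frontend/backend names, node addresses, and ports are placeholders, and cookie-based stickiness (covered in the first HAProxy link above) is the alternative if you terminate or re-encrypt TLS at the proxy:

# haproxy.cfg (sketch): TLS passthrough with source-IP stickiness so each client keeps
# reaching the same NiFi node that issued its bearer token.
frontend nifi_front
    mode tcp
    bind 10.29.144.56:443
    default_backend nifi_cluster

backend nifi_cluster
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server nifi1 10.29.144.57:8443 check
    server nifi2 10.29.144.58:8443 check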
09-05-2025
05:16 AM
1 Kudo
@yoonli This thread is growing into multiple queries that are not directly related. Please start a new community question so the information is easier for our community members to follow when they have similar issues. Thank you, Matt
09-05-2025
02:24 AM
Thank you @MattWho, I’m currently using nifi-atlassian-nar-2.5.0-SNAPSHOT.nar, even though my NiFi version is 2.4.0.