Member since
02-01-2022
281
Posts
103
Kudos Received
60
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1120 | 05-15-2025 05:45 AM |
| | 4948 | 06-12-2024 06:43 AM |
| | 7920 | 04-12-2024 06:05 AM |
| | 5823 | 12-07-2023 04:50 AM |
| | 3203 | 12-05-2023 06:22 AM |
11-28-2023
05:47 AM
@joseomjr is on to the right solution here. Your regex should match the attribute name "kafka.topic", not the Expression Language reference "${kafka.topic}". A quick test on regex101.com confirms that "kafka\.topic" matches.
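The same check can be reproduced outside regex101 with plain Python `re` (the attribute name is the one from the post; everything else here is just illustration):

```python
import re

# Escaping the dot makes the pattern match the literal attribute
# name "kafka.topic" and nothing else.
pattern = re.compile(r"kafka\.topic")

print(bool(pattern.fullmatch("kafka.topic")))       # True: the bare attribute name
print(bool(pattern.fullmatch("${kafka.topic}")))    # False: EL syntax is not part of the name
print(bool(re.fullmatch("kafka.topic", "kafkaXtopic")))  # True: an unescaped dot matches any char
```

The last line is why the backslash matters: without it, the `.` is a wildcard and the pattern matches more than intended.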
11-28-2023
05:35 AM
@raj_dev This may be a better question for the OSS NiFi community. There are a mailing list and a Slack (chat) channel you can find here: https://nifi.apache.org/mailing_lists.html At Cloudera we provide our own branded NiFi UI; you would need to do something similar. This is a very complicated task that requires building your own binaries with the modifications to the UI. Before you go so deep as to build a new UI with your logo, be sure to have a look at the NiFi of the future, where you deploy NiFi flows as functions in any cloud provider, or run NiFi on Kubernetes. In that future state, no one sees or touches the NiFi UI. You can find more about Cloudera DataFlow here: https://docs.cloudera.com/dataflow/cloud/index.html
11-15-2023
05:36 AM
1 Kudo
Regarding your NiFi Expression Language: test it in a simple flow and inspect the flowfile's attributes to ensure topicName is correct, then take it to the Kafka processor. If you are not seeing the results of the expression, either something isn't right or that property doesn't accept Expression Language. Look at the (?) on any property to see what it accepts. I suspect the expression itself is invalid as well, so make sure you test and confirm the attribute is as expected before using it deeper in your flow.

Regarding mapping: what I suggest could be DistributedMapCache, or even a flow that does a lookup against some other service. With this concept you provide key-value pairs that map your consumer topics to your producer topics. When you look up a key, for example "alerts_Consumer_Topic_Name", the value would be "alerts_Producer_Topic_Name". If this is stored outside of NiFi, these values can be managed and changed outside the scope of the NiFi flow. Example flow with DistributedMapCache: https://github.com/ds-steven-matison/NiFi-Templates/blob/main/DistributedCache_Demo.xml
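The mapping concept can be sketched as a plain key-value lookup. Here a Python dict stands in for DistributedMapCache or an external lookup service; the second topic pair is hypothetical, added only to show the shape of the table:

```python
# Stand-in for an external store such as DistributedMapCache.
# Keys are consumer topics; values are the producer topics they map to.
TOPIC_MAP = {
    "alerts_Consumer_Topic_Name": "alerts_Producer_Topic_Name",
    "metrics_Consumer_Topic_Name": "metrics_Producer_Topic_Name",  # hypothetical second pair
}

def producer_topic_for(consumer_topic: str) -> str:
    # Fail loudly on a missing mapping rather than publish to a wrong topic.
    try:
        return TOPIC_MAP[consumer_topic]
    except KeyError:
        raise ValueError(f"no producer topic mapped for {consumer_topic!r}")

print(producer_topic_for("alerts_Consumer_Topic_Name"))
# -> alerts_Producer_Topic_Name
```

Because the table lives outside the flow logic, adding or renaming a topic pair is a data change, not a flow change.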
11-13-2023
07:15 AM
@Arash Did you inspect the flowfile (to ensure it's the expected format, etc.) and inspect its attributes (which may hold more detail about the conflict)? Additionally, you can set the processor log level to DEBUG and/or monitor nifi-app.log for more details. If this worked before and now doesn't, I would expect something to have changed in the flowfile.
11-13-2023
07:03 AM
Sometimes I need to increase the timeouts in the processor configuration. You may also need to reduce how often the processor executes, in case the endpoint cannot handle too many requests at once.
11-08-2023
07:36 AM
@Rohit1997jio If your endpoint test works with Postman and your InvokeHTTP is set up similarly, the above error suggests the NiFi node cannot connect to the endpoint. You need to ensure you have network connectivity from the NiFi host to the endpoint.
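A minimal connectivity check you could run from the NiFi host itself (the host and port below are placeholders, not values from the post; this only tests TCP reachability, which is exactly what that error points at):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint; substitute the real host and port from your InvokeHTTP URL.
print(can_connect("example.com", 443))
```

If this returns False from the NiFi host but the same check succeeds from your Postman machine, the problem is network routing or a firewall, not the processor configuration.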
11-08-2023
07:33 AM
@Rohit1997jio I do not think this is possible. You would need a method outside of the consume/produce that handles the logic for which consume topic maps to which produce topic; then you could use a dynamic topic name in the producer. However, you would still be limited by the fact that ConsumeKafka doesn't take upstream connections. In the example above, if customerTopicX is attribute based, you can use the same attribute logic in Topic Name for a single PublishKafka instead of the three seen above. That would at least clean up your flow.
10-23-2023
11:44 AM
@MWM @cotopaul If you get the record reader/writer using the schema(s) you want, you do not have to do any magic to convert values; it should just work. Only use Infer Schema long enough to get the structure when you have none, then copy/paste the schema and use it as @cotopaul has described in place of Infer Schema. You can also use a Schema Registry. Make whatever edits you need to satisfy the reader (upstream) and the writer (downstream), as they sometimes need minor adjustments, like in this case.
09-06-2023
08:00 AM
@manishg You should only copy the individual NARs you know you need, not ALL of the 1.10 NARs. Your error suggests a conflict with one or more of them. That being said, be very careful with the expectation that things from 1.10 will work in 1.22; each NAR would need to be tested individually for compatibility in 1.22.x.
08-28-2023
05:29 AM
@JohnnyRocks Using ReplaceText more than once is something you want to avoid entirely. You need to look at how to solve the schema concerns within the record-based processors; it should be possible to avoid ReplaceText altogether. If your upstream data is that different (3 different formats) within the same pipeline, consider how to address that upstream or in separate NiFi flows. Alternatively, multiple pipelines can be built, each with its own top branch, that pipe into the same record-based processor. That would be something like 3 single routes, each through a ReplaceText, all going to ConvertRecord. However, I would still try to optimize without ReplaceText in the manner described here.