Member since
02-01-2022
270
Posts
96
Kudos Received
59
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2143 | 06-12-2024 06:43 AM
 | 3230 | 04-12-2024 06:05 AM
 | 2187 | 12-07-2023 04:50 AM
 | 1311 | 12-05-2023 06:22 AM
 | 2228 | 11-28-2023 10:54 AM
11-13-2023
07:15 AM
@Arash Did you inspect the flowfile (to confirm it is in the expected format, etc.) and inspect the flowfile attributes (they may contain more detail about the conflict)? Additionally, you can set the processor log level to DEBUG and/or monitor nifi-app.log for more details. If this worked before and now doesn't, I would expect something in the flowfile to have changed.
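One way to get that extra detail (a sketch, assuming a default NiFi install; the logger name below is the standard-processors package and is illustrative, not necessarily your exact processor class) is to raise the log level in conf/logback.xml:

```
<!-- conf/logback.xml: add a logger for the package of the processor
     you are debugging; org.apache.nifi.processors.standard is an example -->
<logger name="org.apache.nifi.processors.standard" level="DEBUG"/>
```

logback.xml is normally polled for changes, so the new level should take effect without a restart; watch logs/nifi-app.log for the additional output.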
11-13-2023
07:03 AM
Sometimes I need to increase the timeouts in the processor configuration. You may also need to reduce the processor's execution rate in case the endpoint cannot handle too many requests at once.
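For example, in InvokeHTTP the relevant properties are along these lines (defaults shown are from recent releases and may vary by version; the increased values are only illustrative):

```
Connection Timeout : 5 secs   ->  try e.g. 30 secs
Read Timeout       : 15 secs  ->  try e.g. 60 secs
```

To slow the request rate, adjust Run Schedule on the processor's Scheduling tab.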
11-08-2023
07:42 AM
@Arash This error indicates an issue with your SSL Context Service, "SSL Service for elasticsearch cluster". If this was previously working, perhaps the SSL certificate has changed. Check whether the Elasticsearch certificate has been renewed; if so, update the keystores/truststores accordingly and test again.
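As a quick check from the NiFi host, something like the following (hostname, port, and truststore path are placeholders for your environment) shows what the cluster is serving now versus what your truststore contains:

```
# What certificate is the Elasticsearch endpoint presenting right now?
openssl s_client -connect es-host:9200 -servername es-host </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -fingerprint

# What is in the truststore referenced by the SSL Context Service?
keytool -list -v -keystore /path/to/truststore.jks
```

If the fingerprints or validity dates differ, the cert was renewed and the truststore needs updating.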
11-08-2023
07:36 AM
@Rohit1997jio If your endpoint test works in Postman and your InvokeHTTP is set up similarly, the above error suggests the NiFi node cannot connect to the endpoint. You would need to ensure you have network connectivity from the NiFi host to the endpoint.
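A quick way to verify this (the URL is a placeholder) is to run curl from the NiFi node itself, not from your workstation:

```
# Run on the NiFi host; Postman succeeding on your laptop does not
# prove the NiFi host can reach the endpoint.
curl -v --connect-timeout 5 https://your-endpoint.example.com/api/path
```

A connection timeout or refused connection here points at a firewall, proxy, or DNS issue between NiFi and the endpoint rather than at the processor.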
11-08-2023
07:33 AM
@Rohit1997jio I do not think this is possible. You would need logic outside of the consume/produce processors that maps each consume topic to its produce topic; then you could use a dynamic topic name in the producer. However, you would still be limited by the fact that ConsumeKafka does not accept upstream connections. In the example above, if customerTopicX is attribute-based, you can use the same attribute logic in Topic Name for a single PublishKafka instead of the three seen above. That would at least clean up your flow.
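For reference, ConsumeKafka writes the source topic to the kafka.topic attribute, so a single PublishKafka can route dynamically. A sketch (the remapped topic names are illustrative):

```
# PublishKafka -> Topic Name property, via NiFi Expression Language:
${kafka.topic}

# Or remap first in UpdateAttribute, e.g. a property "target.topic":
${kafka.topic:replace('customerTopicA', 'outTopicA')}
```

Then set Topic Name to ${target.topic} if you used the remapping approach.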
10-23-2023
11:44 AM
@MWM @cotopaul If you get the record reader/writer using the schema(s) you want, you do not have to do any magic to convert values; it should just work. Only use Infer Schema long enough to get the structure when you have none. Then copy/paste the schema and use it, as @cotopaul has described, in place of Infer Schema. You can also use a Schema Registry. Make the edits you need to satisfy the reader (upstream) and writer (downstream), as they sometimes need minor adjustments, like in this case.
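Concretely, you can set the reader's Schema Access Strategy to "Use 'Schema Text' Property" and paste an explicit Avro schema; the fields below are only an illustration of the shape:

```
{
  "type": "record",
  "name": "example",
  "fields": [
    { "name": "id",     "type": "long" },
    { "name": "name",   "type": "string" },
    { "name": "amount", "type": ["null", "double"], "default": null }
  ]
}
```

Making a field nullable with a union and default, as with "amount" above, is the kind of minor adjustment that often satisfies both reader and writer.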
10-03-2023
08:27 AM
1 Kudo
@MWM Before sendEmail, you need to add a DetectDuplicate processor: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.12.1/org.apache.nifi.processors.standard.DetectDuplicate/ You can find a sample template here: https://github.com/steven-matison/NiFi-Templates/blob/master/DetectDuplicate_DistributedMapCache_Demo.xml
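A minimal DetectDuplicate configuration looks something like the following (the attribute name and duration are illustrative; pick whatever attribute defines "duplicate" for your flow):

```
Cache Entry Identifier    : ${email.subject}
Distributed Cache Service : DistributedMapCacheClientService
Age Off Duration          : 1 hour
```

Route the "non-duplicate" relationship on to the email processor and terminate (or log) "duplicate", so repeated flowfiles within the age-off window never trigger a second email.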
09-06-2023
08:04 AM
@Kiranq Why are you using ExecuteScript? You can set up a DBCP (DataBaseConnectionPool) controller service with your SQL connection and driver file. Make sure the JDBC driver is found on all NiFi hosts. Then you are able to use any processor that references a DBCP Controller Service, for example ExecuteSQL.
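A DBCPConnectionPool sketch, assuming MySQL purely as an example (URL, driver class, and path are placeholders for your database):

```
Database Connection URL     : jdbc:mysql://db-host:3306/mydb
Database Driver Class Name  : com.mysql.cj.jdbc.Driver
Database Driver Location(s) : /opt/nifi/drivers/mysql-connector-j.jar
Database User               : nifi_user
Password                    : (sensitive value)
```

The driver jar must exist at the same path on every node in the cluster, which is what "found on all NiFi hosts" means in practice.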
09-06-2023
08:00 AM
@manishg You should only copy the individual NARs you know you need, not ALL of the 1.10 NARs. Your error suggests a conflict with one or more of them. That being said, be super careful with the expectation that things from 1.10 will work in 1.22; each of them would need to be tested individually for compatibility with 1.22.x.
08-28-2023
05:29 AM
@JohnnyRocks Using ReplaceText more than once is something you want to avoid entirely. You need to look at how to solve the schema concerns within the record-based processors; it should be possible to avoid ReplaceText altogether. If your upstream data is that different (three different formats) within the same pipeline, consider how to address that upstream or in separate NiFi flows. Alternatively, multiple pipelines can be built with separate top branches that pipe into the same record-based processor; this would be something like three single routes, each through a ReplaceText, then all going to ConvertRecord. However, I would still try to optimize without ReplaceText in the manner described here.