Member since: 11-03-2023
Posts: 32
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3909 | 11-08-2023 09:29 AM
10-14-2025
12:00 AM
OK, so can I make a flow like ConsumeKafkaRecord --> topic A --> PublishKafkaRecord --> topic B, using the record-based processors for both consuming and publishing data? Will this be fast?
10-13-2025
07:19 AM
OK, so can I make a flow like ConsumeKafkaRecord --> topic A --> PublishKafkaRecord --> topic B, using the record-based processors for both consuming and publishing data? Will this be fast?
10-10-2025
05:03 AM
Hi, this is my current NiFi flow, where I am consuming data in NiFi using ConsumeKafka_1_0 and publishing using PublishKafka_1_0: HiveMQ (MQTT) → Kafka Topic A → NiFi consumes → NiFi publishes → Kafka Topic B. My requirement is to send data in real time; I want real-time data streams like Kafka Streams. How can I achieve that using NiFi? Is Kafka Streams available in NiFi? I am not doing any data transformation or any other operation on the data. I am simply consuming and publishing it, but now I want to do it in real time.
Labels:
- Apache NiFi
07-03-2025
05:29 AM
Hi Matt, the Quartz cron scheduler method worked, but one issue has come up related to the time at which I want the data to be consumed. I used the cron expression 0-30 */6 * * * ?, and based on this I was expecting data to be consumed after about 6 minutes. 50K records were published, but all of the data was consumed in under 3 minutes (around 2 min 50 sec). How can I configure this timing accurately? Please help with this.
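A likely explanation for the fast drain, sketched below under the assumption of standard Quartz field semantics (seconds, minutes, hours, day-of-month, month, day-of-week): in Quartz, the first field is seconds, so `0-30` means the trigger fires on every second from 0 through 30, in every minute divisible by 6, not once every ~6 minutes.

```python
# Minimal sketch, assuming standard Quartz cron field semantics:
# "0-30 */6 * * * ?" fires on EVERY second 0..30 of every 6th minute,
# i.e. 31 triggers per 6-minute window -- enough to drain a 50K
# backlog well before the 6 minutes are up.
seconds = list(range(0, 31))      # "0-30": seconds 0..30 inclusive
minutes = list(range(0, 60, 6))   # "*/6":  minutes 0, 6, 12, ..., 54

fires_per_hour = len(minutes) * len(seconds)
print(fires_per_hour)  # 310 triggers per hour

# Fixing the seconds field to a single value, "0 */6 * * * ?",
# fires exactly once per 6-minute window:
print(len(minutes))    # 10 triggers per hour
```

Note that even with one trigger per 6 minutes, each trigger polls whatever backlog has accumulated, so the consumption itself is not spread across the interval.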
07-01-2025
11:51 PM
Hi, my requirement is to consume all records from a Kafka topic (e.g. 1 lakh, i.e. 100,000) in one go using the ConsumeKafkaRecord processor, scheduled to run once every 4 hours, because I need to merge all the data, create a Parquet file, and store it in an HDFS path using the PutHDFS processor. If I consume all incoming data continuously and then run a MergeContent processor once every 4 hours, the issue is that all the data stays queued in NiFi for 4 hours, consuming memory. To avoid that, I wanted to use ConsumeKafkaRecord with a 4-hour schedule, but it is not able to consume all the records. How do I solve this issue?
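An illustrative sketch (not NiFi code) of why one scheduled run cannot drain the backlog, under the assumption that each trigger performs a single bounded poll: emptying a large backlog requires repeated polls until the topic is empty. The `max_poll_records` limit of 500 is an assumed value, and a plain Python list stands in for the topic.

```python
# Sketch: one scheduled trigger = one bounded poll, so a single run
# every 4 hours fetches at most `max_poll_records` records. Draining
# a 100,000-record backlog needs repeated polls until empty.
def drain(topic, max_poll_records=500):
    batches = []
    while topic:
        batches.append(topic[:max_poll_records])
        topic = topic[max_poll_records:]
    return batches

backlog = list(range(100_000))
batches = drain(backlog)
print(len(batches))     # 200 polls to empty the backlog
print(len(batches[0]))  # 500 records per poll
```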
Tags:
- NiFi
Labels:
- Apache NiFi
09-23-2024
01:16 AM
1 Kudo
Hi @MattWho, thanks for your response.

"I also don't understand the overhead of ingesting the same messages twice in your NiFi" — My requirement is to send the data to different endpoints so that each can perform different operations on it.

"Why not have a single ConsumeKafka ingesting the messages from the topic and then routing the success relationship from the ConsumeKafka twice (once to InvokeHTTP A and once to InvokeHTTP B)?" — For me, one flow corresponds to one vendor, and I will have multiple vendors, each with their own separate endpoints. Keeping everything in one flow is not possible, so I am creating a separate dataflow and separate retry logic for each of them. The issue above concerns just one vendor: they require the same data (consumed from the same Kafka topic) to be pushed to two separate endpoints, but I am not able to handle the retry logic for them.

"Why publish failed or retry FlowFile messages to an external topic R just so they can be consumed back into your NiFi?" — Yes, I want them to be consumed back into NiFi. I publish all failed requests to the retry topic, and they are handled in the retry flow. This way my main flow stays free of failed requests, and new requests without errors are pushed to the endpoint successfully.

"It would be more efficient to just keep them in NiFi and create a retry loop on each InvokeHTTP. NiFi even offers retry handling directly on the relationships within the processor configuration." — If I add a retry loop to InvokeHTTP and the endpoint is down for a long time, too many requests will get queued in NiFi.

"If you must write the message out to just one topic R, you'll need to append something to the message that indicates which InvokeHTTP (A or B) failure or retry resulted in it being written to topic R. Then have a single retry dataflow that consumes from topic R and extracts that A or B identifier from the message so that it can be routed to the correct InvokeHTTP. Just seems like a lot of unnecessary overhead." — Please help me with the retry logic: since the data goes into the same retry topic, how can I differentiate whether it failed from dataflow 1 or from dataflow 2?
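The tagging approach described in the reply above can be sketched as follows. This is a hedged illustration, not NiFi code: before a failed request is published to the shared retry topic R, an identifier for the InvokeHTTP (A or B) that failed is attached to the message, and the single retry flow routes on it. The field name "retry.endpoint" is an illustrative choice, not a NiFi or Kafka built-in.

```python
import json

# Sketch: stamp each failed request with the endpoint whose
# InvokeHTTP failed, then route on that stamp in one retry flow.
def tag_for_retry(payload, endpoint):
    return {"headers": {"retry.endpoint": endpoint}, "value": payload}

def route(message):
    # RouteOnAttribute-style decision inside the single retry flow
    return message["headers"]["retry.endpoint"]

failed_a = tag_for_retry(json.dumps({"order": 42}), "A")
failed_b = tag_for_retry(json.dumps({"order": 43}), "B")
print(route(failed_a))  # A -> retry via InvokeHTTP A
print(route(failed_b))  # B -> retry via InvokeHTTP B
```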
08-27-2024
05:52 AM
Hi, in my NiFi flow I have two dataflows, and I want to send the same data to two different HTTP endpoints. Both dataflows consume from the same Kafka topic (topic A) with different Kafka group IDs and push data to different HTTP endpoints (endpoint 1 and endpoint 2). On any failure, the data is produced to a retry topic R, which is common to both dataflows. There is a separate retry dataflow for each, configured with its own HTTP endpoint, but both consume from the same retry topic. The issue is that, because the retry topic is shared, retry dataflow A can consume data belonging to dataflow B. How should I avoid this situation? The retry topic must remain the same for both. Here is a sample of the dataflow.
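One possible way to keep the shared retry topic, sketched below as an assumption rather than a confirmed fix: give each retry flow its own consumer group so both see every message, and have each flow keep only the messages stamped with its own flow identifier and ignore the rest. The "source.flow" field name is illustrative.

```python
# Sketch: messages on the shared retry topic R carry a flow id;
# each retry dataflow filters for its own id and drops the rest.
retry_topic = [
    {"source.flow": "A", "value": "failed-req-1"},
    {"source.flow": "B", "value": "failed-req-2"},
    {"source.flow": "A", "value": "failed-req-3"},
]

def messages_for(flow_id, messages):
    return [m for m in messages if m["source.flow"] == flow_id]

print([m["value"] for m in messages_for("A", retry_topic)])
# ['failed-req-1', 'failed-req-3']
```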
Labels:
- Apache NiFi
12-27-2023
10:46 PM
My "error" topic is on a different Kafka cluster, so I want data that fails due to a connection error to move to the failure relationship and then on to the error topic. Is there any way to do so?
12-26-2023
09:05 AM
In my flow I am publishing data to a Kafka topic. If there is a failure, the request should go to the failure relationship, where I publish that data to an error topic. But in my case, when there is no connectivity with Kafka, or the Kafka broker value is not correct, the request gets queued in the PublishKafka processor instead of moving to the failure relationship. This queue will affect my other requests and even my error-handling logic. Please suggest a solution. Attaching an image of the error and my NiFi flow.
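The desired behavior can be sketched as follows. This is an assumed model, not PublishKafka's actual implementation: bound how long a publish may retry against an unreachable broker, and after the deadline treat the record as failed so it can be routed to the error topic instead of sitting in the queue. A fake broker that never acknowledges stands in for the connectivity failure.

```python
import time

# Sketch: retry the send until a delivery deadline, then give up
# and hand the record to the failure path.
def publish_with_timeout(send, record, timeout_s, retry_interval_s):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if send(record):
            return "success"
        time.sleep(retry_interval_s)
    return "failure"  # route to failure relationship -> error topic

broker_down = lambda record: False  # broker never acknowledges
result = publish_with_timeout(broker_down, {"key": "k"},
                              timeout_s=0.3, retry_interval_s=0.1)
print(result)  # failure
```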
Labels:
- Apache NiFi
12-21-2023
10:40 PM
The NiFi version I am using is old (Powered by Apache NiFi - Version 1.8.0.3.3.0.0-165). I have developed the same solution using the RetryFlowFile processor, but it is not working in my other environment due to the old NiFi version, so I need the same implementation there too. @MattWho, even the retry relationship is not available in this version. I was trying to achieve this with expression language, but that is not working either.
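For versions that predate RetryFlowFile, the classic pattern is an UpdateAttribute-style counter plus a RouteOnAttribute-style check, emulated below in Python as a hedged sketch. The attribute name "retry.count" and the limit of 3 are illustrative choices, not NiFi defaults.

```python
# Sketch of an attribute-based retry loop: increment a counter on
# each failure and route to a give-up path once the limit is hit.
def increment_retry(attributes):
    attributes["retry.count"] = str(int(attributes.get("retry.count", "0")) + 1)
    return attributes

def should_retry(attributes, limit=3):
    return int(attributes.get("retry.count", "0")) < limit

flowfile = {}                   # FlowFile attributes as a plain dict
while should_retry(flowfile):   # failure loop back into the counter
    increment_retry(flowfile)
print(flowfile["retry.count"])  # 3 -> route to the give-up path
```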