Member since: 08-09-2024
Posts: 3
Kudos Received: 2
Solutions: 0
10-07-2025
07:15 AM
Hello there. I am using a ReplaceText processor where the replacement value is the query below:

INSERT into log_hive.log_ozone_retention_jobs (Table_Name, Process_Date, Process, Status, Source_Count, Target_Count, Start_Time, End_Time, Notes)
select 'Customer_Voip', ${date_list}, 'Insert_Into_Ozone', 'Insert Successful', ${source_count}, null, '${start_time}', current_timestamp(), null;

date_list is an attribute that arrives with the incoming FlowFile. The downstream processor is PutHiveQL, which executes the insert query above. I was previously able to insert a single record per day using PutHiveQL. However, today I found that PutHiveQL actually inserted 2 records for the same day: the incoming FlowFile had a date_list value of 20250401, for example, and was expected to insert only 1 record, but 2 records were inserted with 2 different end times. This happened for every date that was meant to be loaded.

PutHiveQL processor configuration:

Interestingly, I have used the same processor in the next process group, doing the same kind of insert for a different purpose, and as expected it inserted only 1 record per date_list/process_date. Just to add, I am using similar logic at least 8-10 times within the NiFi flow and it runs correctly without any duplicate records. Can somebody please let me know what the issue is, or am I missing something? Thank you!!
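For reference, this is roughly what the rendered statement looks like for date_list = 20250401 (the source_count and start_time values here are purely illustrative, not my actual values):

INSERT into log_hive.log_ozone_retention_jobs (Table_Name, Process_Date, Process, Status, Source_Count, Target_Count, Start_Time, End_Time, Notes)
select 'Customer_Voip', 20250401, 'Insert_Into_Ozone', 'Insert Successful', 125000, null, '2025-04-01 02:00:00', current_timestamp(), null;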
Labels: Apache NiFi
08-11-2024
02:57 AM
2 Kudos
Thank you @SAMSAL, I appreciate the way you explained it. Generally I would prefer to add/update the initial.maxvalue.<column> dynamic property of QDT to restart fetching the missed data as a one-time modification, but I would love to see NiFi offer a fault-tolerant mechanism in the future so that developers don't have to find a workaround or recover the lost data manually by modifying the QDT.
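For example, a one-time reset for the scenario in my question below might look roughly like this (the column name "id" and the value 49 are just placeholders for my case; as I understand it, the stored state also has to be cleared before the initial value is honored again):

Dynamic property on QueryDatabaseTable:
    initial.maxvalue.id = 49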
08-10-2024
12:15 AM
Hello there. I am new to NiFi and have started using QueryDatabaseTable (QDT) for incremental fetch. I want to know how to handle the following situation.

For example, the QDT processor fetched new data based on the id column: id 50 to id 55. The state of the QDT processor (the max value it has processed, visible under View State) is now 55. The next steps are ConvertAvroToJSON -> ConvertJSONToSQL -> PutSQL.

Now what if the PutSQL processor fails for some reason? Shouldn't the QDT processor roll back its max-value state from 55 to 49, so that the next time it executes it starts fetching from id 50 again? When I checked the state of the QDT processor it showed 55 even though the new records were not inserted into the final table, and when I ran the QDT processor again it fetched data from id 56 onwards, causing data loss on the destination side. Is there any way to let the QDT processor know that the downstream processors failed, so that it rolls its state back to the value it had before the current run?
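To illustrate the behavior I am describing (the table and column names are placeholders for my actual ones), QDT keeps the max value in its state and builds the next query from it, roughly like:

-- run 1 (stored state = 49): rows 50-55 are fetched
SELECT * FROM my_table WHERE id > 49;
-- the state is updated to 55 even though PutSQL later failed,
-- so the next run skips rows 50-55 entirely:
SELECT * FROM my_table WHERE id > 55;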
Labels: Apache NiFi