04-10-2026
07:00 AM
@donaldo71 I have not been able to identify a known issue that aligns with the description you have shared. That is an interesting sequence of provenance events on a single FlowFile (SEND followed by clones).

Can you share the "Relationships" configuration of your PutSQL processor? Make sure you have not checked the "retry" box on the "success" relationship. One thing you might try, to see if the same issue persists, is to check the "retry" box on the "retry" relationship. This allows the original FlowFile to remain in the inbound connection for up to the configured number of retry attempts (default 10) before being routed to the "retry" relationship. I'd be curious about your observations after the above configuration change.

Would you be willing to download the flow definition JSON for this dataflow and share it?

Is this the full "SQL Statement" set in your PutSQL processor?

UPDATE tbl SET status = 'proceed', startDate = GETDATE() WHERE messageId = ${messageId}

Where are you utilizing the two attributes that go missing? Can you share your UpdateAttribute processor configuration?

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
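As an aside, rather than inlining ${messageId} into the statement via Expression Language, PutSQL also supports a parameterized form, where the statement uses a JDBC ? placeholder and the value arrives in sql.args.N.* FlowFile attributes. A sketch, reusing the table and attribute names from the statement above:

```sql
-- "SQL Statement" property with a JDBC placeholder instead of
-- inline Expression Language substitution:
UPDATE tbl SET status = 'proceed', startDate = GETDATE() WHERE messageId = ?
```

Upstream (for example in an UpdateAttribute processor), you would set sql.args.1.value = ${messageId} and sql.args.1.type = 4 (the java.sql.Types code for INTEGER, assuming messageId is numeric). This avoids SQL injection and quoting issues when attribute values are substituted into the statement.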
10-18-2025
01:37 PM
1 Kudo
Hello @donaldo71,

This looks like SQL Server is hitting deadlocks because many records arrive at once. You can try a couple of things.

First, enable the retry option on the PutSQL or PutDatabaseRecord processor. If retries helped previously, this could also help in your case. Also, decrease the concurrency and batch sizes to reduce the load on SQL Server.

Additionally, on the SQL side, if you can, use row-versioning isolation to reduce locking:

ALTER DATABASE DBNAME SET READ_COMMITTED_SNAPSHOT ON;
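Two notes on that ALTER DATABASE statement: it needs exclusive access to the database, so on a busy system it is often combined with WITH ROLLBACK IMMEDIATE, and you can verify the current setting first. A sketch (SQL Server; DBNAME is a placeholder for your database name):

```sql
-- Check whether READ_COMMITTED_SNAPSHOT is already enabled:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'DBNAME';

-- Enable it, rolling back open transactions so the statement
-- does not wait indefinitely for exclusive access:
ALTER DATABASE DBNAME SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```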
02-10-2025
10:08 AM
Hi again, I managed to split the records into individual records thanks to JOLT, like this:

[
  {
    "operation": "shift",
    "spec": {
      "records": {
        "*": {
          "@(2,messageId)": "[&1].messageId",
          "@(2,markerId)": "[&1].markerId",
          "@(2,dateFrom)": "[&1].dateFrom",
          "@(2,dateTo)": "[&1].dateTo",
          "recordId": "[&1].recordId",
          "account": "[&1].account",
          "data": {
            "email": "[&2].email",
            "firstName": "[&2].firstName",
            "lastName": "[&2].lastName"
          },
          "city": "[&1].city"
        }
      }
    }
  }
]

Now my output looks like this:

[
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 1, "account" : "152739203233" },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 2, "email" : "jsmith@gmail.com", "firstName" : "John", "lastName" : "Smith" },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 3, "city" : "Los Angeles" },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 4 },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 5 },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 6, "account" : "6789189790191" },
  { "messageId" : 1234, "markerId" : "T", "dateFrom" : 6436058131202690000, "dateTo" : -3840351829778683400, "recordId" : 7, "city" : "San Fransisco" }
]

But I still don't know how to remove/filter the records which have idNumber and accountNumber fields (in this case records 4, 5, 6). Can someone help me?
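Not a definitive answer, but as a generic sketch of the filtering step: if the goal is to drop records that carry nothing beyond the shared "envelope" fields copied onto every split record, the decision is just a key-set comparison. The field names below are taken from the sample output above; the exact keep/drop rule is an assumption and may need adjusting to match which records you actually want removed.

```python
# Envelope fields that the JOLT shift copies onto every record.
ENVELOPE = {"messageId", "markerId", "dateFrom", "dateTo", "recordId"}

def filter_records(records, envelope=ENVELOPE):
    """Keep only records that have at least one field beyond the envelope."""
    return [r for r in records if set(r) - envelope]

# Minimal example: record 4 has only envelope fields and is dropped,
# record 6 carries an "account" payload field and is kept.
sample = [
    {"messageId": 1234, "markerId": "T", "recordId": 4},
    {"messageId": 1234, "markerId": "T", "recordId": 6, "account": "6789189790191"},
]
kept = filter_records(sample)
```

In NiFi itself, the same idea could be expressed without custom code using a QueryRecord processor with a SQL predicate on the payload columns (for example, routing only rows where a payload field IS NOT NULL).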
05-23-2024
02:35 PM
@donaldo71 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.