Member since
05-22-2024
7
Posts
0
Kudos Received
0
Solutions
04-08-2026
11:38 PM
This happens in version 1.23.2. In the PutSQL processor I have the following command:

UPDATE tbl SET status = 'proceed', startDate = GETDATE() WHERE messageId = ${messageId}

The first retry attempt no longer had this attribute. When I look at Data Provenance, I see the following events next to this record in PutSQL: SEND, CLONE, CLONE, CLONE, DONE. It is as if, each time I retry, PutSQL cloned the flow file, and the clone no longer has the formatDateTo and formatDateFrom attributes.
04-08-2026
06:09 AM
I mean that if the PutSQL processor is unable to change the status, e.g. because the database is unavailable, and the flow file goes down the retry path to retry the process, then the attributes I created earlier in UpdateAttribute disappear. These are my settings in the PutSQL processor:
- Support Fragmented Transactions: false
- Database Session AutoCommit: false
- Transaction Timeout: 30 sec
- Batch Size: 50
- Obtain Generated Keys: false
- Rollback On Failure: false
04-08-2026
01:39 AM
Hi,
I'm having a problem with a NiFi flow. I use the UpdateAttribute processor to create an attribute like this:
formatDateFrom = ${dateFrom:format("yyyy-MM-dd")}
After this I have a PutSQL processor that changes a status in the DB. When everything goes fine, it is ok. However, when something goes wrong with the change and my flow goes to retry, this attribute formatDateFrom disappears and is no longer present in the flow file. Why does this happen, and how can I preserve this attribute? Attributes created with EvaluateJsonPath didn't disappear, only those created with UpdateAttribute.
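For reference, a rough Python sketch of what that Expression Language call computes, under the assumption (not stated in the post) that dateFrom holds an epoch-milliseconds value; the function name is made up for illustration:

```python
from datetime import datetime, timezone

def format_date_from(date_from_millis: int) -> str:
    """Mimic ${dateFrom:format("yyyy-MM-dd")}: epoch millis -> yyyy-MM-dd (UTC assumed)."""
    dt = datetime.fromtimestamp(date_from_millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d")

# e.g. 0 ms since the epoch is 1970-01-01 UTC
print(format_date_from(0))  # 1970-01-01
```

NiFi's format() also accepts an optional time zone argument; the UTC choice above is only an assumption for the sketch.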
Labels:
- Apache NiFi
10-17-2025
12:45 PM
Hi! I have flows in NiFi, and in many places I have to save data to the database. I mainly use the PutSQL or PutDatabaseRecord processors. So far everything was working fine, but today we wanted to test our data and issued 15 messages at once. Unfortunately, in several places an error appeared while saving or updating:

error.code: 1205
error.message: Transaction (Process ID 76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
error.sql.state: 40001

After that, some data was saved or updated correctly, but some unfortunately was not. How can I resolve this situation, and how can I protect myself against something like this?
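As the error message itself suggests, the usual mitigation for a deadlock victim (SQL Server error 1205, SQLSTATE 40001) is to rerun the transaction. A minimal Python sketch of that retry-with-backoff logic, with a hypothetical DeadlockError and a stand-in callable instead of a real database driver:

```python
import time

class DeadlockError(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001 (deadlock victim)."""
    sqlstate = "40001"

def run_with_retry(txn, max_attempts=3, backoff_s=0.0):
    """Run a transaction callable, retrying only when it is chosen as a deadlock victim."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(backoff_s * attempt)  # linear backoff between reruns

# Demo: fail twice with a deadlock, then succeed on the third attempt.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("Transaction was deadlocked (1205)")
    return "updated"

print(run_with_retry(flaky_update))  # updated
```

In NiFi terms the same idea is often expressed with the failure/retry relationship looped back with a penalty, so the flow file is re-attempted rather than dropped; the sketch only shows the retry principle.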
Labels:
- Apache NiFi
02-10-2025
10:08 AM
Hi again, I managed to split the records into individual records thanks to a JOLT spec like this:

[
  {
    "operation": "shift",
    "spec": {
      "records": {
        "*": {
          "@(2,messageId)": "[&1].messageId",
          "@(2,markerId)": "[&1].markerId",
          "@(2,dateFrom)": "[&1].dateFrom",
          "@(2,dateTo)": "[&1].dateTo",
          "recordId": "[&1].recordId",
          "account": "[&1].account",
          "data": {
            "email": "[&2].email",
            "firstName": "[&2].firstName",
            "lastName": "[&2].lastName"
          },
          "city": "[&1].city"
        }
      }
    }
  }
]

Now my output is like this:

[
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 1, "account": "152739203233" },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 2, "email": "jsmith@gmail.com", "firstName": "John", "lastName": "Smith" },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 3, "city": "Los Angeles" },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 4 },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 5 },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 6, "account": "6789189790191" },
  { "messageId": 1234, "markerId": "T", "dateFrom": 6436058131202690000, "dateTo": -3840351829778683400, "recordId": 7, "city": "San Fransisco" }
]

But I still don't know how to remove/filter the records that originally had idNumber and accountNumber fields (in this case records 4 and 5, which now carry only the common fields). Can someone help me?
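Setting JOLT aside for a moment, the filtering step being asked about reduces to: keep only records that still carry one of the wanted payload fields after the shift (account, email, firstName, lastName, or city). A Python sketch of that logic over the post-shift output shown above:

```python
# Fields that mark a record as worth keeping after the JOLT shift has
# already stripped idNumber and accountNumber.
KEEP_FIELDS = {"account", "email", "firstName", "lastName", "city"}

def filter_records(records):
    """Keep only records that still have at least one wanted payload field."""
    return [r for r in records if KEEP_FIELDS & set(r)]

records = [
    {"recordId": 1, "account": "152739203233"},
    {"recordId": 2, "email": "jsmith@gmail.com"},
    {"recordId": 4},          # originally had only idNumber, dropped by the shift
    {"recordId": 5},          # originally had only accountNumber, dropped by the shift
    {"recordId": 7, "city": "San Fransisco"},
]
print([r["recordId"] for r in filter_records(records)])  # [1, 2, 7]
```

In a NiFi flow the same predicate could live in a record-oriented filter step; the Python above is only the logic, not a JOLT spec.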
02-10-2025
03:21 AM
Hi, I have JSON input like this:

{
  "messageId": 1234,
  "markerId": "T",
  "dateFrom": 6436058131202690000,
  "dateTo": -3840351829778683400,
  "records": [
    { "recordId": 1, "account": "152739203233" },
    { "recordId": 2, "data": { "email": "jsmith@gmail.com", "firstName": "John", "lastName": "Smith" } },
    { "recordId": 3, "city": "Los Angeles" },
    { "recordId": 4, "idNumber": "12345" },
    { "recordId": 5, "accountNumber": "55671" },
    { "recordId": 6, "account": "6789189790191" },
    { "recordId": 7, "city": "San Fransisco" }
  ]
}

And I would like to have output like this:

[
  {
    "messageId": 1234,
    "markerId": "T",
    "dateFrom": 6436058131202690000,
    "dateTo": -3840351829778683400,
    "recordId": 1,
    "account": "152739203233"
  },
  {
    "messageId": 1234,
    "markerId": "T",
    "dateFrom": 6436058131202690000,
    "dateTo": -3840351829778683400,
    "recordId": 2,
    "email": "jsmith@gmail.com",
    "firstName": "John",
    "lastName": "Smith"
  },
  {
    "messageId": 1234,
    "markerId": "T",
    "dateFrom": 6436058131202690000,
    "dateTo": -3840351829778683400,
    "recordId": 3,
    "city": "Los Angeles"
  },
  {
    "messageId": 1234,
    "markerId": "T",
    "dateFrom": 6436058131202690000,
    "dateTo": -3840351829778683400,
    "recordId": 6,
    "account": "6789189790191"
  },
  {
    "messageId": 1234,
    "markerId": "T",
    "dateFrom": 6436058131202690000,
    "dateTo": -3840351829778683400,
    "recordId": 7,
    "city": "San Fransisco"
  }
]

I need to keep only the records with account, data (email, firstName, lastName) or city; I don't need the records with idNumber and accountNumber. Additionally, I need each record to contain the common part: messageId, markerId, dateFrom and dateTo. Is it possible to do something like that with a JOLT transformation?
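The transformation being described (copy the header fields into each record, lift the nested data object to the top level, and drop records carrying idNumber or accountNumber) can be sketched in plain Python to pin down the intended semantics; this is only the logic, not a JOLT spec:

```python
DROP_FIELDS = {"idNumber", "accountNumber"}

def flatten(message):
    """Copy header fields into each record, lift nested 'data', drop unwanted records."""
    header = {k: v for k, v in message.items() if k != "records"}
    out = []
    for rec in message["records"]:
        if DROP_FIELDS & set(rec):
            continue  # skip records carrying idNumber or accountNumber
        flat = dict(header)
        for k, v in rec.items():
            if k == "data":
                flat.update(v)  # lift email/firstName/lastName to the top level
            else:
                flat[k] = v
        out.append(flat)
    return out

msg = {
    "messageId": 1234,
    "markerId": "T",
    "records": [
        {"recordId": 1, "account": "152739203233"},
        {"recordId": 2, "data": {"email": "jsmith@gmail.com"}},
        {"recordId": 4, "idNumber": "12345"},
        {"recordId": 5, "accountNumber": "55671"},
    ],
}
print([r["recordId"] for r in flatten(msg)])  # [1, 2]
```

In JOLT this typically ends up as a shift (as in the follow-up post) plus a second operation for the filtering; the sketch just makes the target behavior explicit.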
Labels:
- Apache NiFi
05-22-2024
10:16 AM
Hi, I have a problem with NiFi. I download JSON records from Kafka and split them into single records (here I use a round robin strategy). I add attributes, then I set an id attribute and divide the flow into two paths: on one I do some actions with InvokeHTTP to fetch some data and transform it into attributes, I add attributes, clear the content, and then I would like to merge the two paths back together, keeping the JSONs from one path and the attributes from the other. I use the MergeContent processor for this, merging on the id attribute that I assigned at the beginning. And now I have a problem: it works, but with a larger number of records, e.g. 200, it doesn't, because I get a BIN_MANAGER_FULL error. Minimum Number of Entries is 2 and Maximum Number of Entries is 2, while Maximum Number of Bins is 10, and I would not like to increase it. Is there any way to work around this and make it work for a larger number of records? For example, apply a limit so that it merges only when there is space, or divide content and wait? I have no idea how to resolve this problem.
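To illustrate why BIN_MANAGER_FULL appears with these settings, here is a simplified toy model of correlation-attribute binning (not MergeContent's actual implementation): with at most 10 bins and 2 entries per bin, an 11th distinct id arriving before any pair completes finds no free bin:

```python
def simulate_bins(arrivals, max_bins=10, entries_per_bin=2):
    """Toy model of correlation-attribute binning: returns ids that found no free bin."""
    bins = {}        # correlation id -> entry count for bins still waiting to fill
    rejected = []
    for cid in arrivals:
        if cid in bins:
            bins[cid] += 1
            if bins[cid] == entries_per_bin:
                del bins[cid]  # bin is complete: merged and freed
        elif len(bins) < max_bins:
            bins[cid] = 1      # open a new bin for this id
        else:
            rejected.append(cid)  # analogous to BIN_MANAGER_FULL
    return rejected

# 11 distinct ids arrive before any second entry shows up: the 11th has nowhere to go
print(simulate_bins(list(range(11))))  # [10]
```

The model suggests the failure depends on how many distinct ids are in flight at once, not on the total record count, which is why 200 records hit the limit while small tests did not.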
Labels:
- Apache NiFi