Member since: 03-29-2023
Posts: 52
Kudos Received: 32
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 831 | 04-05-2024 12:26 PM |
| | 972 | 03-05-2024 10:53 AM |
| | 8847 | 04-03-2023 12:57 AM |
04-01-2024
07:08 PM
1 Kudo
Hi @mohdriyaz, you can achieve this using the following steps. Step 1: In GenerateFlowFile I took your input. Step 2: Use EvaluateJsonPath, set the Destination to flowfile-attribute, and write the JSON Path. Step 3: Use UpdateAttribute for any further modification. Output: ------------ If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
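The extraction step above can be sketched outside NiFi. A minimal Python sketch, assuming a hypothetical input record with a nested `details.status` field (in EvaluateJsonPath this would be the JSONPath `$.details.status` written to a flowfile attribute):

```python
import json

# Hypothetical input, standing in for the FlowFile content from GenerateFlowFile
flowfile_content = '{"id": 1, "details": {"status": "ACTIVE"}}'

record = json.loads(flowfile_content)

# Rough equivalent of EvaluateJsonPath with Destination = flowfile-attribute
# and a dynamic property  status = $.details.status
attributes = {"status": record["details"]["status"]}

print(attributes["status"])  # ACTIVE
```

With the value in an attribute, UpdateAttribute can then reshape it with Expression Language as needed.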
04-01-2024
12:30 AM
1 Kudo
Hi @Dataengineer1, did you get a chance to implement it? Would you kindly share the resolution if it is done?
03-30-2024
03:33 AM
1 Kudo
I have tried something like the below, which worked. Once Dim1 is completed, Dim2 runs:
1: Add an attribute to capture StartDate - UpdateAttribute
2: Create a log table in the database which will maintain the LastRunTime
3: Custom SQL with `WHERE updateddate >= $LastRunTime` using ExecuteSQL
4: Insert the records into the database
5: Update the log table with the StartTime attribute from step 1
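The incremental pattern in these steps can be sketched as follows. A hedged Python sketch, assuming a hypothetical source table name and an `updateddate` column; in NiFi the query string would live in the ExecuteSQL processor and the start time in an UpdateAttribute attribute:

```python
from datetime import datetime, timezone

def build_incremental_query(table: str, last_run_time: str) -> str:
    # Step 3: custom SQL filtering on the last successful run time
    return (
        f"SELECT * FROM {table} "
        f"WHERE updateddate >= '{last_run_time}'"
    )

# Step 1: capture the start time before extracting; after a successful load
# this value replaces LastRunTime in the log table (step 5)
start_time = datetime.now(timezone.utc).isoformat()

# Step 2: LastRunTime would normally be read from the log table
last_run_time = "2024-03-29T00:00:00"

query = build_incremental_query("dim2_source", last_run_time)
print(query)
```

Capturing `start_time` before the extract (rather than after) avoids missing rows updated while the load is running.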
03-25-2024
10:29 AM
Hi @MattWho, GenerateTableFetch is a good option, but the issue is that it doesn't support custom queries. I believe additional settings are required for calling the next processor via the REST API, correct? Is there an alternative method to achieve this? For example, in a finance data mart there will be 15-20 groups of flows executing one after another; maintaining or calling them via the API would entail additional work.
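For reference, chaining flows via the REST API as discussed above amounts to a PUT against the downstream processor's run-status endpoint. A minimal sketch, assuming a hypothetical processor id, an unsecured NiFi at localhost:8080, and revision version 1 (the revision must match the processor's current revision, normally fetched first with a GET):

```python
import json
import urllib.request

def build_start_request(base_url: str, processor_id: str,
                        revision_version: int) -> urllib.request.Request:
    # NiFi REST API: PUT /nifi-api/processors/{id}/run-status
    body = json.dumps({
        "revision": {"version": revision_version},
        "state": "RUNNING",
    }).encode()
    return urllib.request.Request(
        f"{base_url}/nifi-api/processors/{processor_id}/run-status",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# "example-processor-id" is a hypothetical placeholder
req = build_start_request("http://localhost:8080", "example-processor-id", 1)
print(req.get_method(), req.full_url)
# Sending would be: urllib.request.urlopen(req)
```

For many dependent flows, this per-processor orchestration is indeed extra work, which is why signaling inside the flow (e.g. a success relationship feeding the next group) is usually preferred when the processors accept input.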
03-25-2024
05:05 AM
1 Kudo
Hello, I have a QueryDatabaseTable processor where I've written custom SQL and set the maximum-value column property to pick only the latest data from the source. Suppose I'm creating the Dim1 flow. I want to ensure that once Dim1 is finished, Dim2 starts. However, QueryDatabaseTable does not accept an incoming connection. How can I achieve this?
Labels:
- Apache NiFi
03-25-2024
04:39 AM
1 Kudo
Hi @Dataengineer1, did you get a chance to implement it? Try the below to implement it. If it worked, please create a document and upload it to the community to help others 🙂 https://www.youtube.com/watch?v=j-JXo3xPxOk
03-25-2024
04:10 AM
1 Kudo
Hello @Dataengineer1, I was looking for a similar sort of solution. The latest version seems different from the older one; the new toolkit does not have the standalone command to generate the certificate. The video below might help (note that it is an old vlog): https://www.youtube.com/watch?v=LanpbWR7Gv8
03-05-2024
10:53 AM
Hello @MattWho, thank you for the reply. Initially I thought it would be best to include all the details in the ticket to avoid any confusion. Here's what I discovered: when the API was first executed, it fetched 100 records. Consider that issues in JIRA contain different types of information, such as packages, bugs, epics, stories, and modules, and each type has different columns. When fetching a small number of rows the data appeared consistent, but when fetching 4-5 months of data the structure changed. For example, a bug might have 3 columns, a story 8, and an epic 3. Consequently, all records had their missing columns filled with null values. Here's an example:

Bug 1: Columns A, B, C
Story 2: Columns A, B, C, D, E, F, G
Epic 3: Columns A, B

As a result, the final records looked like this:

{
A: Value,
B: Value,
C: Value,
D: null,
E: null,
F: null,
G: null
},
{
A: Value,
B: Value,
C: Value,
D: Value,
E: Value,
F: Value,
G: Value
},
{
A: Value,
B: Value,
C: null,
D: null,
E: null,
F: null,
G: null
}

To address this issue, I used the SplitJson processor to split the records and process them individually, which resolved the issue. However, after implementing this solution I encountered another issue where the choice list was not inserting records. I managed to handle that as well, and now everything is working fine.
03-02-2024
09:28 AM
Hi everyone, I am parsing data from the JIRA portal, and after all cleanup I push the data into a database. The mapping runs fine when I load a small amount of data, but it fails when I load two months of data, which is around 800 rows (only 360 loaded). I encounter the error java.lang.ClassCastException: null, and on the failure relationship I am not able to find the rows or the reason causing this issue. 2024-03-02 21:07:48,707 ERROR [Timer-Driven Process Thread-10] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=0e2ec0fc-ebc5-4734-8e59-868a2aa0e6d5,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399268684-21833, container=default, section=329], offset=4, length=115143],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=115143]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:07:50,049 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Successfully deleted 0 files (0 bytes) from archive
2024-03-02 21:07:50,049 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Archive cleanup completed for container default; will now allow writing to this container. Bytes used = 39.08 GB, bytes free = 443.33 GB, capacity = 482.41 GB
2024-03-02 21:07:52,668 ERROR [Timer-Driven Process Thread-1] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=30e55056-a871-4358-b033-7872641ee7b0,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399272628-21841, container=default, section=337], offset=4, length=142915],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=142915]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:07:54,361 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:07:54,541 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20509 Records and 0 Swap Files in 180 milliseconds (Stop-the-world time = 12 milliseconds), max Transaction ID 6017851
2024-03-02 21:07:54,541 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 20509 records in 180 milliseconds
2024-03-02 21:07:54,753 INFO [FileSystemRepository Workers Thread-3] o.a.n.c.repository.FileSystemRepository Successfully archived 7 Resource Claims for Container default in 0 millis
2024-03-02 21:07:57,190 ERROR [Timer-Driven Process Thread-6] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=0ee70349-2ae7-4dbe-b60a-a2e52d429c65,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399277164-21848, container=default, section=344], offset=0, length=114332],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=114332]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:02,149 INFO [NiFi Web Server-345736] o.a.n.c.queue.AbstractFlowFileQueue Canceling ListFlowFile Request with ID f105168d-3b10-1c3c-f4f0-ecd43e5c4f15
2024-03-02 21:08:02,555 ERROR [Timer-Driven Process Thread-9] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=63faf3f8-b0a1-4700-928a-e5e18aceb7ce,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399282448-21856, container=default, section=352], offset=4, length=73205],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=73205]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:07,322 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=affcd389-081d-463e-846b-c43b73f3af16,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399287302-21864, container=default, section=360], offset=4, length=82425],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=82425]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:12,593 ERROR [Timer-Driven Process Thread-8] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=b5edcbe2-594d-44da-93be-fbd3b7170d4a,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399292562-21873, container=default, section=369], offset=0, length=104546],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=104546]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:14,541 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:08:14,735 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20537 Records and 0 Swap Files in 193 milliseconds (Stop-the-world time = 20 milliseconds), max Transaction ID 6017953
2024-03-02 21:08:14,735 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 20537 records in 193 milliseconds
2024-03-02 21:08:14,756 INFO [FileSystemRepository Workers Thread-3] o.a.n.c.repository.FileSystemRepository Successfully archived 14 Resource Claims for Container default in 0 millis
2024-03-02 21:08:17,248 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=caaf660b-1710-4603-8595-e674bc6f1f7b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399297234-21881, container=default, section=377], offset=0, length=91115],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=91115]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:22,444 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=ed18b46f-63e3-4d09-90e5-de7acbf9fb85,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399302433-21889, container=default, section=385], offset=0, length=98715],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=98715]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:27,369 ERROR [Timer-Driven Process Thread-9] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=1ea0a28b-52b0-44be-870f-f4f8cf069052,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399307296-21897, container=default, section=393], offset=4, length=128665],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=128665]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:34,736 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:08:34,930 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20557 Records and 0 Swap Files in 193 milliseconds (Stop-the-world time = 20 milliseconds), max Transaction ID 6018029
Labels:
- Apache NiFi
03-02-2024
06:44 AM
Thank you very much @SAMSAL. Do we need to provide the JsonPath expressions? I had simplified the data using a JOLT spec, so I am getting flat JSON and the below worked. What about if we have structured data? Another question: what if I want to write the calculation on a column rather than field.value? Something like below?