Created on 03-02-2024 09:28 AM - edited 03-02-2024 10:42 AM
Hi Everyone,
I am parsing data from the JIRA portal and, after all the cleanup, pushing it into a database. My mapping runs fine when I load a small amount of data, but it fails when I load two months of data, which is around 800 rows (only 360 get loaded). I am encountering the error java.lang.ClassCastException: null, and from the failure I am unable to identify which rows are causing the issue, or why.
2024-03-02 21:07:48,707 ERROR [Timer-Driven Process Thread-10] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=0e2ec0fc-ebc5-4734-8e59-868a2aa0e6d5,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399268684-21833, container=default, section=329], offset=4, length=115143],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=115143]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:07:50,049 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Successfully deleted 0 files (0 bytes) from archive
2024-03-02 21:07:50,049 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Archive cleanup completed for container default; will now allow writing to this container. Bytes used = 39.08 GB, bytes free = 443.33 GB, capacity = 482.41 GB
2024-03-02 21:07:52,668 ERROR [Timer-Driven Process Thread-1] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=30e55056-a871-4358-b033-7872641ee7b0,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399272628-21841, container=default, section=337], offset=4, length=142915],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=142915]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:07:54,361 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:07:54,541 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20509 Records and 0 Swap Files in 180 milliseconds (Stop-the-world time = 12 milliseconds), max Transaction ID 6017851
2024-03-02 21:07:54,541 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 20509 records in 180 milliseconds
2024-03-02 21:07:54,753 INFO [FileSystemRepository Workers Thread-3] o.a.n.c.repository.FileSystemRepository Successfully archived 7 Resource Claims for Container default in 0 millis
2024-03-02 21:07:57,190 ERROR [Timer-Driven Process Thread-6] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=0ee70349-2ae7-4dbe-b60a-a2e52d429c65,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399277164-21848, container=default, section=344], offset=0, length=114332],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=114332]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:02,149 INFO [NiFi Web Server-345736] o.a.n.c.queue.AbstractFlowFileQueue Canceling ListFlowFile Request with ID f105168d-3b10-1c3c-f4f0-ecd43e5c4f15
2024-03-02 21:08:02,555 ERROR [Timer-Driven Process Thread-9] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=63faf3f8-b0a1-4700-928a-e5e18aceb7ce,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399282448-21856, container=default, section=352], offset=4, length=73205],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=73205]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:07,322 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=affcd389-081d-463e-846b-c43b73f3af16,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399287302-21864, container=default, section=360], offset=4, length=82425],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=82425]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:12,593 ERROR [Timer-Driven Process Thread-8] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=b5edcbe2-594d-44da-93be-fbd3b7170d4a,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399292562-21873, container=default, section=369], offset=0, length=104546],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=104546]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:14,541 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:08:14,735 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20537 Records and 0 Swap Files in 193 milliseconds (Stop-the-world time = 20 milliseconds), max Transaction ID 6017953
2024-03-02 21:08:14,735 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 20537 records in 193 milliseconds
2024-03-02 21:08:14,756 INFO [FileSystemRepository Workers Thread-3] o.a.n.c.repository.FileSystemRepository Successfully archived 14 Resource Claims for Container default in 0 millis
2024-03-02 21:08:17,248 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=caaf660b-1710-4603-8595-e674bc6f1f7b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399297234-21881, container=default, section=377], offset=0, length=91115],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=91115]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:22,444 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=ed18b46f-63e3-4d09-90e5-de7acbf9fb85,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399302433-21889, container=default, section=385], offset=0, length=98715],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=98715]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:27,369 ERROR [Timer-Driven Process Thread-9] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=f10515e5-3b10-1c3c-802a-83b2ee68649a] Failed to put Records to database for StandardFlowFileRecord[uuid=1ea0a28b-52b0-44be-870f-f4f8cf069052,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1709399307296-21897, container=default, section=393], offset=4, length=128665],offset=0,name=a016a8e7-82ae-4ed4-a51d-934d976fd173,size=128665]. Routing to failure.
java.lang.ClassCastException: null
2024-03-02 21:08:34,736 INFO [pool-7-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2024-03-02 21:08:34,930 INFO [pool-7-thread-1] o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log with 20557 Records and 0 Swap Files in 193 milliseconds (Stop-the-world time = 20 milliseconds), max Transaction ID 6018029
Created 03-04-2024 05:49 AM
@saquibsk
Unfortunately, the exception "java.lang.ClassCastException: null" is not very helpful here, making it very difficult to suggest where in the data the issue resides.
You might want to try setting the PutDatabaseRecord processor's logging to DEBUG in NiFi's logback.xml to see if it produces more output that might be useful. The logger name is:
org.apache.nifi.processors.standard.PutDatabaseRecord
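For reference, enabling DEBUG for that logger would look something like this (a minimal sketch of the standard logback logger element, added inside the configuration element of NiFi's conf/logback.xml):

```xml
<!-- Enable DEBUG output for the PutDatabaseRecord processor only -->
<logger name="org.apache.nifi.processors.standard.PutDatabaseRecord" level="DEBUG"/>
```

NiFi's default logback.xml is typically configured to rescan itself periodically, so the change is usually picked up without a restart.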
It is also a good idea to provide the exact version of Apache NiFi or CFM you are using, as that is useful when asking about issues in the community. It allows those assisting to narrow down the scope of known issues to look for.
Thanks,
Matt
Created on 03-05-2024 10:53 AM - edited 03-05-2024 10:54 AM
Hello @MattWho ,
Thank you for the reply. Initially, I thought it would be best to include all the details in the ticket to avoid any confusion.
Here's what I discovered: when the API was first executed, it fetched 100 records. JIRA issues contain different types of information, such as packages, bugs, epics, stories, and modules, and each type has a different set of columns. When fetching a small number of rows, the data appeared consistent, but when fetching 4-5 months of data, the structure varied. For example, a bug might have 3 columns, a story 7, and an epic only 2. Consequently, every record's missing columns were filled with null values. Here's an example:
Bug 1: Columns A, B, C
Story 2: Columns A, B, C, D, E, F, G
Epic 3: Columns A, B
As a result, the final records looked like this:
[
  {
    "A": "Value",
    "B": "Value",
    "C": "Value",
    "D": null,
    "E": null,
    "F": null,
    "G": null
  },
  {
    "A": "Value",
    "B": "Value",
    "C": "Value",
    "D": "Value",
    "E": "Value",
    "F": "Value",
    "G": "Value"
  },
  {
    "A": "Value",
    "B": "Value",
    "C": null,
    "D": null,
    "E": null,
    "F": null,
    "G": null
  }
]
To address this issue, I used the SplitJson processor to split the records and process them individually. This resolved the issue. However, after implementing this solution, I encountered another issue where the choice list was not inserting records. I managed to handle that as well, and now everything is working fine.
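For anyone hitting the same symptom: with SplitJson's JsonPath Expression property set to `$[*]`, an incoming array like the sketch below (placeholder columns from the example above, not my real schema) is split into one FlowFile per record, so each record is processed on its own rather than under a single schema inferred across mixed issue types:

```json
[
  { "A": "Value", "B": "Value", "C": "Value", "D": null },
  { "A": "Value", "B": "Value", "C": null, "D": null }
]
```

Each resulting FlowFile then contains a single JSON object, e.g. `{ "A": "Value", "B": "Value", "C": "Value", "D": null }`.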