Member since: 09-24-2019
Posts: 4
Kudos Received: 0
Solutions: 0
06-28-2020 10:21 AM
Hi @tsvk4u, please check the following JOLT spec and tell me if it works for you:

    [
      {
        "operation": "shift",
        "spec": {
          "*": {
            "value_schema_id": "&",
            "system_code": "records[&1].value.&",
            "event_type": "records[&1].value.&",
            "metric_name": "records[&1].value.event_detail.&",
            "metric_id": "records[&1].value.event_detail.&",
            "create_ts": "records[&1].value.event_detail.&",
            "global_person_profile_id": "records[&1].value.event_detail.&"
          }
        }
      },
      {
        "operation": "cardinality",
        "spec": {
          "value_schema_id": "ONE"
        }
      }
    ]
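To illustrate what the chained spec above is intended to do, here is a hypothetical input (field names taken from the spec; all values are made up). For each array element, the shift moves `system_code` and `event_type` under `records[i].value`, nests the remaining fields under `records[i].value.event_detail`, and lifts `value_schema_id` to the top level; the cardinality operation then forces `value_schema_id` to a single value rather than an array. This is a sketch of the intended behavior, not verified output:

```json
[
  {
    "value_schema_id": 1,
    "system_code": "SYS_A",
    "event_type": "CREATE",
    "metric_name": "latency",
    "metric_id": "42",
    "create_ts": "2020-06-28T10:00:00Z",
    "global_person_profile_id": "p-001"
  }
]
```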
06-27-2020 11:48 AM
@tsvk4u If you replace records[1] with records[0], you won't get null as the first value inside the records array. But if you wrote it that way deliberately and simply want to remove the null values, apply the following spec:

    [
      {
        "operation": "shift",
        "spec": {
          "records": {
            "*": {
              "*": "records[]"
            }
          },
          "*": "&"
        }
      }
    ]

Note: this spec should be applied after your own spec.
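The null removal works because a shift spec matches on an element's child keys: a null array entry has no children, so the inner "*" never fires for it and nothing is copied to the output, while non-null entries are re-emitted into a fresh records array. A hypothetical input after the original spec might look like this (structure assumed for illustration; only the non-null entry would survive):

```json
{
  "records": [
    null,
    {
      "value": {
        "system_code": "SYS_A",
        "event_type": "CREATE"
      }
    }
  ]
}
```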
09-26-2019 08:13 AM
Hi Dennis, thanks for the response. I have already set the Fetch Size property to 20000 and Max Rows Per Flow File to 20000. My question is: is NiFi capable of handling 526 columns? I ask because I can see that NiFi is pulling the data, but the performance is not up to the mark. Is there a better approach other than splitting and re-joining the columns? I think that with around 200 columns, NiFi would be faster.

-Thanks
Vamcy