Member since: 07-07-2022
Posts: 7
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1544 | 08-05-2022 03:26 PM |
09-04-2022 04:19 PM
@araujo Thank you for the help. It works.
09-02-2022 12:22 AM
Hi, in a clustered NiFi setup, if a node that has flow files queued goes down and can't be brought back up, I understand manual intervention is required to redirect those flow files from the crashed node to an active node. I tried copying the flow files from the flowfile and content repositories of the crashed node to an active node, but it didn't help. Can anyone please help with what has to be done to redirect the flow files from the crashed node to an active node? Thanks.
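For concreteness, here is a minimal sketch (in Python, with hypothetical paths) of the repository copy described above. The directory names are the defaults set by nifi.flowfile.repository.directory and nifi.content.repository.directory.default in nifi.properties, and both NiFi instances would need to be stopped before copying; as noted in the question, this alone did not resolve the issue.

```python
import shutil
from pathlib import Path

# Hypothetical locations: a mount of the crashed node's install and the
# active node's install. Adjust to match nifi.properties on each node.
CRASHED = Path("/mnt/crashed-node/nifi")
ACTIVE = Path("/opt/nifi")

# Default repository directories from nifi.properties:
#   nifi.flowfile.repository.directory=./flowfile_repository
#   nifi.content.repository.directory.default=./content_repository
for repo in ("flowfile_repository", "content_repository"):
    src, dst = CRASHED / repo, ACTIVE / repo
    shutil.copytree(src, dst, dirs_exist_ok=True)  # merge into the existing repo
    print(f"copied {src} -> {dst}")
```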
Labels:
- Apache NiFi
08-25-2022 06:31 PM
Hi, we are migrating data from one database to another using NiFi. We use the GenerateTableFetch processor for the incremental fetch by setting a column in the Maximum-value Columns property. For logging purposes, I need to extract the state of the GenerateTableFetch processor and insert it into a table. By state I mean the maximum value of the configured column that is captured as the processor's state. Can someone please help with how I can extract the processor state? Thank you.
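For illustration, a minimal sketch of pulling that state over the NiFi REST API; the host, port, processor id, and authentication are placeholders, and on a clustered setup GenerateTableFetch keeps its maximum values in cluster-scoped state.

```python
import requests

NIFI_API = "https://nifi-host:8443/nifi-api"  # placeholder host/port
PROCESSOR_ID = "<generate-table-fetch-id>"    # placeholder processor id

# GET /nifi-api/processors/{id}/state returns the processor's stored state.
# Authentication (e.g. a bearer token header) is omitted here for brevity.
resp = requests.get(f"{NIFI_API}/processors/{PROCESSOR_ID}/state")
resp.raise_for_status()
component_state = resp.json().get("componentState", {})

# Print every stored key/value; the max-value column's latest value should
# appear among these entries and could then be inserted into a logging table.
for scope in ("clusterState", "localState"):
    for entry in (component_state.get(scope) or {}).get("state", []):
        print(scope, entry.get("key"), "=", entry.get("value"))
```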
Labels:
- Apache NiFi
08-05-2022 03:26 PM
Thanks, Matt, for your response. We will have a clustered setup, and I have implemented exactly what you described. But there are still concerns coming up about losing the state; one example given is the state being stored in memory. Again, I am new to NiFi and not sure whether such a configuration is possible, but I still want to try whether a stateless implementation is possible. Thanks again, Matt, for the help. I would appreciate any further ideas from the community.
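One way to check the in-memory concern: NiFi's state providers are declared in conf/state-management.xml, and the defaults (a write-ahead local provider and a ZooKeeper cluster provider) persist state rather than holding it only in memory. A small sketch, assuming the default conf layout and that the script runs from the NiFi install directory, that prints what is configured:

```python
import xml.etree.ElementTree as ET

# Parse conf/state-management.xml and list the configured state providers.
# The defaults are WriteAheadLocalStateProvider (disk-backed) for local state
# and ZooKeeperStateProvider (external) for cluster state.
tree = ET.parse("conf/state-management.xml")

for provider in tree.getroot():
    if provider.tag in ("local-provider", "cluster-provider"):
        pid = provider.findtext("id")
        cls = provider.findtext("class")
        print(f"{provider.tag}: id={pid} class={cls}")
```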
08-04-2022 10:41 PM
Hi, I am new to NiFi and stuck on two issues for which I need help. The task is to migrate a set of tables from a source to a target database. The data also has to be filtered on a field before migrating, and the pipeline has to migrate historical data as well as perform an incremental fetch.

1. For the incremental fetch, I used the GenerateTableFetch processor and set the maximum-value column to a date field in the table. I also used the partitioning feature to get the data in chunks by setting the partition size and column. It all works well, but since it is a stateful processor, I have received feedback to go stateless, as we can lose the state if a node crashes. How can I achieve the incremental fetch and partitioning in a stateless manner?

2. The tables to be migrated have a parent-child relationship, and the incremental fetch is required for all of them. However, the child tables don't have any column that can be used to fetch the delta, so they will have to rely on the parent table's lastUpdatedTime field. This also has to be done in a stateless manner. I did try the QueryDatabaseTable processor, setting a join query between child and parent in the 'Custom Query' field and the parent's lastUpdatedTime as the maximum-value column, but that didn't work.

Can someone please help with how to achieve both features in a stateless manner? Thanks, I appreciate your help.
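For illustration, one common way to make this pattern "stateless" from NiFi's point of view is to keep the watermark in a control table in the database itself, so a node crash cannot lose it. A self-contained sketch of that pattern follows; all table and column names are hypothetical, and in a NiFi flow steps 1-4 would map onto ExecuteSQL/PutDatabaseRecord processors rather than Python.

```python
import sqlite3

# Sketch of a "stateless" incremental fetch: the watermark lives in a control
# table in the database instead of in NiFi processor state.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY, last_updated TEXT);
CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES parent(id));
CREATE TABLE watermarks (table_name TEXT PRIMARY KEY, max_value TEXT);
INSERT INTO watermarks VALUES ('parent', '1970-01-01T00:00:00');
INSERT INTO parent VALUES (1, '2022-08-01T10:00:00'), (2, '2022-08-03T09:30:00');
INSERT INTO child  VALUES (10, 1), (11, 2);
""")

# 1. Read the current watermark (this replaces GenerateTableFetch's stored state).
(wm,) = conn.execute(
    "SELECT max_value FROM watermarks WHERE table_name = 'parent'").fetchone()

# 2. Fetch the parent delta; in NiFi this query would go into ExecuteSQL.
parents = conn.execute(
    "SELECT id, last_updated FROM parent WHERE last_updated > ? ORDER BY last_updated",
    (wm,)).fetchall()

# 3. Child tables have no delta column, so join them to the parent's window.
children = conn.execute("""
    SELECT c.id, c.parent_id FROM child c
    JOIN parent p ON p.id = c.parent_id
    WHERE p.last_updated > ?""", (wm,)).fetchall()

# 4. Advance the watermark only after the batch is safely delivered.
if parents:
    new_wm = max(row[1] for row in parents)
    conn.execute("UPDATE watermarks SET max_value = ? WHERE table_name = 'parent'",
                 (new_wm,))
    conn.commit()

print("parents:", parents, "children:", children)
```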
Labels:
- Apache NiFi
07-10-2022 04:15 AM
@hegdemahendra Thank you for your reply. It will be a continuous migration that will run for a few months; once all the historical data is migrated, it will be CDC. The number of tables could be close to 20-30. I don't know the volume yet, but it will be huge. Based on your response, the solution looks like the combined approach for multiple tables plus CDC. Could you please share a little more detail about this approach?
07-07-2022 08:52 PM
Hi, I have to build a NiFi data flow to migrate data from one database to another across multiple schemas. I have to migrate only a specific set of tables, with filtering applied to each of them. I came up with a sample flow using one ExecuteSQL processor per table to be migrated; in each processor's SQL query I applied the filter criteria, then connected the processor to an individual PutDatabaseRecord processor for each destination table. I just want to understand whether separate processors for each source and destination table is the right approach. I would also appreciate some guidance on the best processors to consider for data migration from one database to another. This will be a continuous migration running in a production environment for a couple of months. Appreciate any help. Thanks.
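As a rough illustration of the scaling concern, one processor pair per table means the flow grows linearly with the table count, whereas a single parameterized pattern driven by a table/filter definition is often easier to maintain. A small sketch (table names and filter expressions are hypothetical) of generating the per-table queries that would otherwise be hard-coded into each ExecuteSQL processor:

```python
# Drive many per-table extracts from one definition, instead of a separate
# ExecuteSQL/PutDatabaseRecord pair per table. Names below are placeholders.
TABLES = {
    "customers": "region = 'EU'",
    "orders":    "status <> 'CANCELLED'",
    "invoices":  "amount > 0",
}

def build_queries(tables):
    """Yield (table, SELECT statement) pairs for the migration flow."""
    for table, where in tables.items():
        yield table, f"SELECT * FROM {table} WHERE {where}"

for table, sql in build_queries(TABLES):
    # In NiFi these could populate a single parameterized flow (e.g. via
    # flow file attributes) rather than one processor chain per table.
    print(f"{table}: {sql}")
```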
Labels:
- Apache NiFi