Apache Nifi best practice to implement data replication across databases

Contributor

Hi NiFi experts,

We are looking for best practices / suggestions for our use case below.

a) There are around 100 tables in the source database.

b) This source data needs to be synced into various target databases (after optional transformation).

c) Each source table may need to be synced to multiple target database tables. So one extract from a source table might be used to load multiple target tables that have similar structures.

d) As of today, we have implemented the flows using the core processors ConsumeAMQP - ExecuteSQLRecord - PutDatabaseRecord.

e) The challenge we face today is the growing number of processors, which scales in proportion to the number of source/target tables.

We are looking for a solution that minimizes the number of processors by sharing the ExecuteSQLRecord - PutDatabaseRecord processors across multiple table syncs.
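To illustrate, the attribute-driven pattern we have in mind would set per-table values as flowfile attributes upstream and reference them via Expression Language in the shared processors (the attribute names below are our own, not standard; `database.name` is the attribute the DBCPConnectionPoolLookup controller service uses to pick a connection pool):

```
# Hypothetical flowfile attributes set upstream for one table:
database.name = sales_dw              # consumed by DBCPConnectionPoolLookup
source.query  = SELECT * FROM orders
target.table  = dw_orders

# Shared processor properties referencing the attributes:
ExecuteSQLRecord  -> SQL select query : ${source.query}
PutDatabaseRecord -> Table Name       : ${target.table}
```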

As a proof of concept, we tried to assign the database name, query, target database, target table name, key field, etc. dynamically using the LookupRecord processor. Internally, we tested with the SimpleDatabaseLookupService and the PropertiesFileLookupService to help assign the required attributes dynamically.
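For example, the properties file backing the PropertiesFileLookupService could map a per-table key to each value the shared processors need (one key-value pair per lookup; the file layout and key names below are only illustrative):

```
# table-mapping.properties (illustrative)
orders.target.db=sales_dw
orders.target.table=dw_orders
orders.key.field=order_id

customers.target.db=crm_dw
customers.target.table=dw_customers
customers.key.field=customer_id
```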

Please advise whether this is a good approach, or whether there is another best practice for handling data sync among tables dynamically. The core requirement is to have generic flows that serve multiple tables rather than a dedicated flow for each table.

Please let me know if more details are required. Thanks in advance!
