I am looking for the best solution to replicate data between CDP Public Cloud instances.
I have found a proposal using NiFi:
or using DistCp.
I am not sure that either solution handles "data synchronization" properly (at least the NiFi solution does not seem able to handle file deletes; I am not sure about DistCp).
What is the best way to proceed?
Do you know if Cloudera Replication Manager will soon support the CDP-to-CDP scenario in the cloud?
I have several ideas in mind :
1) NiFi process at the file system level:
capture data with ListHDFS + FetchHDFS --> ingest data with PutHDFS
But what about file deletes?
I was thinking of using GetHDFSEvents to capture "unlink" events and replicate them with DeleteHDFS,
but it seems that GetHDFSEvents is not compatible with ADLS Gen2 storage.
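Since GetHDFSEvents is off the table on ADLS Gen2, one workaround is to diff periodic listings of the source and target instead of consuming inotify-style events, and feed the difference to DeleteHDFS. A minimal local sketch of that comparison (local directories stand in for the two object stores; on CDP you would compare `hdfs dfs -ls -R` output from each side):

```shell
# Two local directories stand in for source and target storage.
WORK=$(mktemp -d)
mkdir -p "$WORK/src" "$WORK/dst"
touch "$WORK/src/a" "$WORK/dst/a" "$WORK/dst/b"   # "b" was deleted at the source

ls "$WORK/src" | sort > "$WORK/src.list"
ls "$WORK/dst" | sort > "$WORK/dst.list"

# comm -13 prints lines unique to the second file: paths that still
# exist at the target but are gone from the source, i.e. delete candidates.
comm -13 "$WORK/src.list" "$WORK/dst.list" > "$WORK/to_delete.list"
cat "$WORK/to_delete.list"   # -> b
```

This trades event granularity for a scheduled reconciliation, but it works against any storage the listing processors can reach.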
2) DistCp:
Again, this seems to work for new data and updated files, but I don't understand how it can handle deletes (unless we drop the target data before each full copy).
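For what it's worth, DistCp does have a delete mode: combining `-update` with `-delete` removes target files that no longer exist at the source, so deletes converge without dropping the target first. A sketch (the ABFS paths are hypothetical; substitute your own storage accounts):

```shell
# Hypothetical source/target ABFS paths -- adjust to your environment.
SRC="abfs://data@srcaccount.dfs.core.windows.net/warehouse"
DST="abfs://data@dstaccount.dfs.core.windows.net/warehouse"

# -update copies only files that are new or changed at the source;
# -delete additionally removes files present at the target but gone
# from the source, which is how DistCp propagates deletes.
CMD="hadoop distcp -update -delete $SRC $DST"
echo "$CMD"   # run the printed command from a cluster gateway node
```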
3) azcopy sync:
Only compatible with ADLS (but I imagine a similar tool is available for S3 buckets), using the "azcopy sync" option.
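If you go the azcopy route, `azcopy sync` can mirror deletes via `--delete-destination=true`, and the S3-side analogue is `aws s3 sync --delete`. A sketch with hypothetical storage accounts:

```shell
# Hypothetical ADLS Gen2 endpoints -- substitute your own accounts.
SRC="https://srcaccount.dfs.core.windows.net/data"
DST="https://dstaccount.dfs.core.windows.net/data"

# --delete-destination=true removes destination blobs that no longer
# exist at the source, so source-side deletes are replicated.
CMD="azcopy sync $SRC $DST --recursive --delete-destination=true"
echo "$CMD"
# S3 equivalent: aws s3 sync s3://src-bucket s3://dst-bucket --delete
```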
4) NiFi process at the Hive level:
Not sure if it's very elegant:
capture data with SelectHive3QL (Avro output),
ingest data with PutHive3Streaming.
But I'm not sure how to manage deletes.
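One way to make deletes converge at the Hive level is to rebuild one partition at a time: drop the target partition, then re-ingest the fresh Avro extract for it. The database, table, and partition names below are hypothetical:

```shell
# Hypothetical database/table/partition -- adjust to your schema.
PART="ds='2024-01-01'"
HQL="ALTER TABLE analytics.events DROP IF EXISTS PARTITION ($PART);"
echo "$HQL"
# On the target CDP, something like: beeline -u "$HIVE_JDBC_URL" -e "$HQL"
# then route the SelectHive3QL Avro extract for that partition into
# PutHive3Streaming so the partition is rebuilt from source state.
```

It's a coarse drop-and-reload rather than true change capture, but it bounds the rebuild to one partition instead of the whole table.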
Any best practice or other, better idea?
I mean, if we delete data on the source CDP for whatever reason (purge, archiving, dataset rebuild), how do we capture those events and replicate the delete action on the target CDP?
It seems that all of my proposals will only be able to add data on the target.
@Delio It would be a bit of a complicated flow design, as direct support for delete events is not available in the GetHDFSEvents processor yet.
But you can refer to this solution, which may help you move forward.
If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here.