
Falcon mirroring assumptions and guarantees

Contributor

Do we have a detailed technical write-up on Falcon mirroring? It uses DistCp under the hood, and I can only assume it uses the -update option, but are there any exceptions to how precisely it follows the DistCp docs/functionality? I'm mostly concerned with partially completed jobs that might have tmp files hanging around when the copy kicks off. I have a use case where the user would like to use mirroring to replicate 1..n feeds within a directory instead of setting up fine-grained feed replication, e.g.:

mirror job 1 =
- /data/cust/cust1
  - /feed-1
  - /feed-n

mirror job 2 =
- /data/cust/cust2
  - /feed-1
  - /feed-n

Any info is appreciated.

1 ACCEPTED SOLUTION


Today, replication in Falcon can be achieved in two ways:

1> Feed-based Replication: Falcon uses a pull-based replication mechanism, meaning that in every target cluster, for a given source cluster, a coordinator is scheduled which pulls the data from the source cluster using DistCp. This requires the data locations being replicated to have dated partitions.
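For illustration, a minimal sketch of such a feed definition, assuming hypothetical cluster names, paths, and schedule (the ${YEAR}/${MONTH}/${DAY} placeholders in the location path are what provide the dated partitions):

  <feed name="cust1-feed-1" description="hypothetical replicated feed" xmlns="uri:falcon:feed:0.1">
    <frequency>hours(1)</frequency>
    <clusters>
      <!-- the coordinator scheduled on the target cluster pulls from the source -->
      <cluster name="primaryCluster" type="source">
        <validity start="2015-01-01T00:00Z" end="2099-12-31T00:00Z"/>
        <retention limit="days(7)" action="delete"/>
      </cluster>
      <cluster name="backupCluster" type="target">
        <validity start="2015-01-01T00:00Z" end="2099-12-31T00:00Z"/>
        <retention limit="months(1)" action="delete"/>
      </cluster>
    </clusters>
    <!-- dated partitions in the path are required for feed-based replication -->
    <locations>
      <location type="data" path="/data/cust/cust1/feed-1/${YEAR}/${MONTH}/${DAY}"/>
    </locations>
    <ACL owner="etl-user" group="hadoop" permission="0755"/>
    <schema location="/none" provider="none"/>
  </feed>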

2> Using the concept of Recipes:

HDFS Directory Replication Recipe

Overview

This recipe replicates arbitrary directories on HDFS from one Hadoop cluster to another. It piggybacks on the replication solution in Falcon, which uses the DistCp tool.

Use Case

* Copy directories between HDFS clusters without dated partitions

* Archive directories from HDFS to cloud storage, e.g. S3 or Azure WASB

Limitations

As the data volume and the number of files grow, this can become inefficient. Users should make sure that already-replicated data is evicted; otherwise there will be performance issues.

For both of the above mechanisms, DistCp options can be passed as custom properties, which are propagated to the DistCp tool (see the sketch after this list):

  • maxMaps: the maximum number of maps used during replication
  • mapBandwidth: the bandwidth in MB/s used by each mapper during replication
  • overwrite: overwrite the destination during replication
  • ignoreErrors: ignore failures during replication without failing the job
  • skipChecksum: bypass checksum verification during replication
  • removeDeletedFiles: delete files that exist in the destination but not in the source
  • preserveBlockSize: preserve the block size during replication
  • preserveReplicationNumber: preserve the replication factor during replication
  • preservePermission: preserve permissions during replication
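As an illustration, such custom properties could be set in the recipe's properties file; a hypothetical snippet with made-up values (the property names follow the list above, but exact names and file layout depend on the recipe template shipped with your Falcon version):

  # DistCp tuning for the replication job (illustrative values)
  maxMaps=8
  mapBandwidth=100
  # delete files present in the destination but not in the source
  removeDeletedFiles=true
  # preserve HDFS attributes on copied files
  preserveBlockSize=true
  preservePermission=true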


5 REPLIES


Contributor

OK, so it's a 1-to-1 mapping of the DistCp functionality that we currently choose to expose (I added the features for maxMaps and mapBandwidth 🙂). Incidentally, in HDP 2.3 the Falcon UI does not have a way to include mirror job parameters. You can do it with the traditional feed definitions.
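For reference, in a traditional feed definition these parameters go in the feed's properties block; a minimal sketch with illustrative values:

  <!-- inside the <feed> entity -->
  <properties>
    <property name="maxMaps" value="8"/>
    <property name="mapBandwidth" value="50"/>
  </properties>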


For mirroring using recipes, you can do it from the command line. I will create a bug to track the mirroring UI not having a way to include mirror job parameters. Thanks for bringing that up!
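A sketch of the command-line route, assuming the hdfs-replication recipe that ships with Falcon (recipe names and flags vary across Falcon versions, so treat this as illustrative):

  # submit the HDFS directory replication recipe via the Falcon CLI;
  # the recipe template and .properties file are read from the configured recipe path
  falcon recipe -name hdfs-replication -operation HDFS_REPLICATION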


https://hortonworks.jira.com/browse/BUG-46884 has been created to track the UI issue.


Falcon supports mirroring for HDFS and Hive.

The performance issue I mentioned above applies only to HDFS mirroring, and only if already-replicated data is not evicted. For Hive mirroring, the last successfully replicated event ID is saved in the data store by Falcon, and the next replication job starts replicating from just past that event ID. Falcon also cleans up the staging paths used for export after the job runs. Since DistCp only sees the new data to be replicated, there is no performance overhead for Hive mirroring.
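For completeness, Hive mirroring can be driven the same way via Falcon's Hive disaster-recovery recipe; a hypothetical invocation (again, names and flags vary by version):

  # submit the Hive DR recipe via the Falcon CLI (illustrative)
  falcon recipe -name hive-disaster-recovery -operation HIVE_DISASTER_RECOVERY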

Just an FYI.