Created 10-27-2015 04:58 PM
Do we have a detailed technical write-up on Falcon mirroring? It uses DistCp under the hood, and I can only assume it uses the -update option, but are there any exceptions to how precisely it follows the DistCp docs/functionality? I'm mostly concerned with partially completed jobs that might have tmp files hanging around when the copy kicks off. I have a use case where the user would like to use mirroring to replicate 1..n feeds within a directory instead of setting up fine-grained feed replication, e.g.:
mirror job 1 = /data/cust/cust1
  - /feed-1
  - /feed-n
mirror job 2 = /data/cust/cust2
  - /feed-1
  - /feed-n
Any info is appreciated.
Created 10-27-2015 05:59 PM
Today, replication in Falcon can be achieved in two ways:
1> Feed-based replication: Falcon uses a pull-based replication mechanism, meaning that in every target cluster, for a given source cluster, a coordinator is scheduled that pulls the data from the source cluster using DistCp. This requires the data locations being replicated to have dated partitions.
2> Using the concept of recipes:
HDFS Directory Replication Recipe
Overview
This recipe replicates arbitrary directories on HDFS from one Hadoop cluster to another. It piggybacks on the replication solution in Falcon, which uses the DistCp tool.
Use Case
* Copy directories between HDFS clusters without dated partitions
* Archive directories from HDFS to cloud storage, e.g. S3 or Azure WASB
Limitations
As the data volume and number of files grow, this can become inefficient. The user should make sure that already-replicated data is evicted; otherwise there will be performance issues.
For both of the above mechanisms, DistCp options can be passed as custom properties, which will be propagated to the DistCp tool.
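To make the pass-through concrete, here is a rough sketch of the kind of DistCp invocation a replication instance ends up issuing; the host names, paths, and values are illustrative only, with maxMaps and mapBandwidth mapping to the -m and -bandwidth options.

```bash
# Illustrative only: roughly the DistCp call a Falcon replication instance runs.
# -update copies only files that are missing or differ on the target,
# -m caps the number of simultaneous map tasks (maxMaps),
# -bandwidth limits MB/s per map (mapBandwidth).
hadoop distcp -update -m 20 -bandwidth 100 \
  hdfs://source-nn:8020/data/cust/cust1 \
  hdfs://target-nn:8020/data/cust/cust1
```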
Created 10-28-2015 12:28 AM
Ok, so it's a 1-to-1 mapping of the DistCp functionality that we currently choose to expose (I added the features for maxMaps and mapBandwidth 🙂). Incidentally, in HDP 2.3 the Falcon UI does not have a way to include mirror job parameters. You can do it with the traditional feed definitions.
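For what it's worth, a minimal sketch of doing that from the CLI with a traditional feed definition (the file and entity names below are made up; the DistCp pass-through properties such as maxMaps and mapBandwidth go into the feed XML as custom properties):

```bash
# Rough sketch with hypothetical names: submit a replication feed whose XML
# carries maxMaps / mapBandwidth as custom properties, then schedule it by
# its entity name (the name attribute in the feed XML, not the file name).
falcon entity -type feed -submit -file cust1-replication-feed.xml
falcon entity -type feed -schedule -name cust1-replication
```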
Created 10-28-2015 12:58 AM
For mirroring using recipes, you can do it from the command line. I will create a bug to track the mirroring UI not having a way to include mirror job parameters. Thanks for bringing that up!
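As a rough sketch (flag names can differ between Falcon releases, so treat this as an assumption and check your version's CLI help), the recipe-based run looks something like:

```bash
# Rough sketch: run the HDFS replication recipe from the Falcon CLI.
# The recipe template and its .properties file are expected in the recipe
# path configured for the Falcon client; exact options vary by release.
falcon recipe -name hdfs-replication
```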
Created 10-28-2015 01:06 AM
https://hortonworks.jira.com/browse/BUG-46884 has been created to track the UI issue.
Created 10-28-2015 01:21 AM
Falcon supports mirroring for HDFS and Hive.
The performance issue I mentioned above applies only to HDFS mirroring, and only if already-replicated data is not evicted. For Hive mirroring, the last successfully replicated event id is saved in the data store by Falcon, and the next replication job starts replicating past that event id. Falcon also cleans up the staging paths used for export after the job runs. Since DistCp only copies the new data to be replicated, there is no performance overhead for Hive mirroring.
Just an FYI.