The example Feed entity below demonstrates the following:
- Cross-cluster replication of a data set
- Native use of a Hive/HCatalog table in Falcon
- Definition of separate retention policies for the source and target tables in replication
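The feed entity itself is not reproduced in this excerpt. As a rough, hedged sketch of the general shape such an entity could take under the Falcon feed schema, the fragment below shows a source and a target cluster with different retention limits and a native HCatalog table reference. All names, dates, and the table URI are placeholders, not values from the original document:

```xml
<feed name="replication-feed" xmlns="uri:falcon:feed:0.1">
    <frequency>days(1)</frequency>
    <clusters>
        <!-- Source cluster: shorter retention (placeholder values) -->
        <cluster name="source-cluster" type="source">
            <validity start="2014-01-01T00:00Z" end="2099-12-31T00:00Z"/>
            <retention limit="days(30)" action="delete"/>
        </cluster>
        <!-- Target cluster: longer retention on the replicated copy -->
        <cluster name="target-cluster" type="target">
            <validity start="2014-01-01T00:00Z" end="2099-12-31T00:00Z"/>
            <retention limit="months(6)" action="delete"/>
        </cluster>
    </clusters>
    <!-- Native HCatalog table reference with a daily partition key -->
    <table uri="catalog:default:sample_table#ds=${YEAR}-${MONTH}-${DAY}"/>
    <ACL owner="falcon" group="users" permission="0755"/>
    <schema location="hcat" provider="hcat"/>
</feed>
```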
Make sure every Oozie server that Falcon talks to has the Hadoop configs configured in oozie-site.xml. For example, the following was added to the target cluster's oozie-site.xml (the property's description reads):
<description>Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is used when there is no exact match for an authority. The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within the Oozie configuration directory; the path can also be absolute (i.e. point to Hadoop client conf/ directories on the local filesystem).</description>
Here /etc/src_hadoop/conf is the source cluster's configuration directory (/etc/hadoop/conf) copied over to the target cluster's Oozie server.
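As a sketch, this mapping is typically expressed through the Oozie property oozie.service.HadoopAccessorService.hadoop.configurations; the host names and ports below are placeholders, not values from the original document:

```xml
<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <!-- '*' is the fallback for unmatched authorities; the source cluster's
       NameNode and JobTracker authorities (placeholders) map to the
       config directory copied from the source cluster -->
  <value>*=/etc/hadoop/conf,source-nn.example.com:8020=/etc/src_hadoop/conf,source-jt.example.com:8021=/etc/src_hadoop/conf</value>
</property>
```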
Also ensure that Oozie on the target cluster can submit jobs to the source cluster. This can be done by setting the property below:
Finally, schedule the feed as shown below; this submits an Oozie coordinator on the target cluster.
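A minimal sketch of doing this with the Falcon CLI, assuming the feed entity is defined in a file and named replication-feed (both placeholders):

```shell
# Submit the feed entity definition to Falcon (file name is a placeholder)
falcon entity -type feed -submit -file replication-feed.xml

# Schedule it; Falcon generates and submits the Oozie coordinator
# on the target cluster
falcon entity -type feed -name replication-feed -schedule
```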