Hello hrishi1dypim,
From what I've seen and learned, making a single Hadoop cluster that spans data centers is not a good approach. One reason is that the slower network links between data centers impair the performance of the cluster too much. Also, high availability for services like HDFS and YARN is designed around an active/standby pair of master daemons (NameNodes for HDFS, ResourceManagers for YARN), so creating some arrangement with four of them, two per data center, would be challenging for sure.
Keeping data in cloud storage services that are naturally redundant and highly available is a better path. You can set up data replication to span availability zones or regions. Then the COB cluster (I assume that's "Continuity Of Business") can pick up the data from its own data center if something happens to production. Using cloud storage services is probably cheaper and easier than shipping data yourself from cluster to cluster.
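If you're on AWS, for example, cross-region replication can be switched on with a couple of API calls. Here's a rough sketch using boto3; the bucket names, IAM role, and regions are made up for illustration, not a prescription:

```python
# Minimal sketch: replicate a production bucket to a COB bucket in another region.
# Bucket names and the IAM role ARN below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Cross-region replication requires versioning on both source and destination buckets.
for bucket in ("prod-data-us-east-1", "cob-data-us-west-2"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate new objects written to the production bucket over to the COB bucket.
s3.put_bucket_replication(
    Bucket="prod-data-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # hypothetical role
        "Rules": [{
            "ID": "replicate-to-cob",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::cob-data-us-west-2"},
        }],
    },
)
```

Other cloud providers have equivalent features, so the same idea carries over even if the API calls differ.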
Doing this effectively implies that data resident on the cluster, like in HDFS, is safe to lose if an outage occurs, and that it can be reconstructed on the other cluster from the same underlying data. The benefit of designing workloads this way is that the clusters themselves become less critical, provided you can spin new ones up when necessary, using Director for example.
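In practice, that mostly means pointing jobs at object storage paths rather than cluster-local HDFS paths, so either cluster can run the same job unchanged. A rough PySpark sketch of what I mean, with the bucket and dataset names as placeholders:

```python
# Minimal sketch: a job that keeps nothing important on HDFS.
# The "analytics-data" bucket and dataset layout are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cob-friendly-job").getOrCreate()

# Read source data from object storage, not from hdfs:// paths,
# so the production and COB clusters see the same inputs.
events = spark.read.parquet("s3a://analytics-data/events/")

# Write results back to object storage as well; HDFS holds only transient shuffle/temp data.
(events.groupBy("event_type").count()
       .write.mode("overwrite")
       .parquet("s3a://analytics-data/reports/event_counts/"))

spark.stop()
```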
There are plenty of patterns that can be imagined here, so I'm interested in hearing what others have done too.