Is it practical/possible to have an ambari-managed client/edge node that can talk to 2 different hadoop clusters, on different versions of HDP?
The scenario: a 3rd-party ETL product is used for ingestion on a few edge nodes talking to cluster A, which runs HDP 2.4.2 on RHEL 6. We're standing up a new cluster B running HDP 2.6.x on RHEL 7, and need to "move" the data-ingest feeds from A to B.
To minimize the provisioning of extra edge nodes (and perhaps additional upstream infrastructure) the preference would be to have edge nodes write to both clusters for a period of time.
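For context, the rough approach we have in mind is to keep a second Hadoop client configuration directory on each edge node and select it per job. The paths and hostnames below are hypothetical, and this assumes the HDP 2.4 client libraries can talk to the 2.6 NameNode over RPC, which is part of what we're unsure about:

```shell
# Default client config continues to point at cluster A (HDP 2.4.2)
export HADOOP_CONF_DIR=/etc/hadoop/conf

# A second, hand-maintained config dir holds cluster B's core-site.xml /
# hdfs-site.xml (copied from cluster B's Ambari, not managed by cluster A's)
hadoop --config /etc/hadoop/conf-clusterB fs -ls /landing

# Alternatively, address cluster B explicitly by its NameNode URI
# (hostname/port are placeholders for cluster B's actual NameNode)
hadoop fs -ls hdfs://clusterB-nn.example.com:8020/landing
```

The open question for us is whether the ETL product can be pointed at an alternate config dir or a fully-qualified `hdfs://` URI in the same way, and whether mixing client and server versions like this is supported.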
Has anyone been through something similar? Do you know of online resources describing the options? Any thoughts or suggestions?