I have an HDF-22.214.171.124 deployment managed by Ambari 126.96.36.199.
Now the requirement is to have Atlas extracting NiFi metadata. Preferably, that Atlas instance should be managed by the same Ambari that currently runs HDF.
The problem is that all the docs and HCC posts seem to cover the opposite direction (installing HDF on an Ambari already running HDP) and not this particular use case (installing HDP on an Ambari already running HDF).
So far, Ambari does seem to support it: I can register HDP-188.8.131.52 as a new version:
But when I click the fly-out at "Install On" and choose the current cluster, Ambari takes me back to the versions screen after a short while, where the HDP stack does NOT appear:
There is no error and no warning anywhere, not even in the ambari-server logs.
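For anyone hitting the same silent failure, this is roughly how I tried to narrow it down: query the Ambari REST API to see what the server actually registered, and watch the log while repeating the "Install On" step. This is a sketch; the host name, credentials, and cluster name below are placeholders for your own environment.

```shell
# Placeholder host and credentials -- adjust for your environment.
AMBARI="http://ambari.example.com:8080"
AUTH="admin:admin"

# Which stacks does this Ambari server know about at all?
curl -s -u "$AUTH" "$AMBARI/api/v1/stacks"

# Which HDP versions are registered on the server side?
curl -s -u "$AUTH" "$AMBARI/api/v1/stacks/HDP/versions"

# Which stack versions is the existing cluster bound to?
# (CLUSTER_NAME is a placeholder for your cluster's name.)
curl -s -u "$AUTH" "$AMBARI/api/v1/clusters/CLUSTER_NAME/stack_versions"

# Watch the server log while clicking "Install On" in the UI.
tail -f /var/log/ambari-server/ambari-server.log
```

In my case the registered HDP version shows up under the stacks resource but never gets attached to the cluster, and the log stays quiet.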
Is this possible with Ambari 184.108.40.206? Am I missing a step?
What is so different in this scenario (first HDF, then HDP) from installing HDF on top of HDP?
I really hope I don't need to start over completely with this Ambari instance, install HDP first (for Atlas), and then put HDF back on top of it (that would erase all my HDF settings).
Any advice is greatly appreciated.
@Jasper, did you find a better solution than re-installing (first HDP, then HDF)?
Anyway, AFAIU it was/is indeed not supported to start with an HDF cluster and then add HDP (components); only the other way around is supported.
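For the record, you can confirm which stack a cluster is bound to with a quick REST call; the host and credentials below are placeholders, and the exact version string will match whatever stack the cluster was created with:

```shell
# Placeholder host/credentials -- adjust for your environment.
curl -s -u admin:admin \
  "http://ambari.example.com:8080/api/v1/clusters?fields=Clusters/version"
# Clusters/version reports the stack the cluster runs (an HDF-* value here),
# which is fixed at cluster creation and cannot be mixed with HDP afterwards.
```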