For provisioning and maintenance we have an automated deploy-and-test process.
Deployment is done with Ansible: every time we change something in our Ansible scripts, we deploy to fresh VMs, run automated tests,
and then destroy the VMs.
For the upgrade to HDF 3.2, our first step is to change our scripts to install HDF 3.2 and then to create a fresh cluster.
For the rollout of HDF 3.2 to our real environment we have not yet decided on an approach:
a) automate the full upgrade process (https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/upgrading_hdf.html)
b) back up the states, reinstall the cluster, and restore the states
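For option (b), the state backup could be sketched roughly like this. The directory locations are assumptions for an Ambari-managed HDF NiFi install and must be verified on the actual nodes; the helper function name is hypothetical:

```shell
#!/bin/sh
# Sketch of approach (b): archive NiFi configuration/state before reinstalling.
# backup_nifi_state is a hypothetical helper; adjust paths to your environment.

backup_nifi_state() {
    # $1 = directory to back up, $2 = destination tar.gz archive
    tar -czf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}

# Assumed HDF locations (verify before use): flow definition and component state
# backup_nifi_state /var/lib/nifi/conf  /backup/nifi-conf.tar.gz
# backup_nifi_state /var/lib/nifi/state /backup/nifi-state.tar.gz
```

After the fresh install, the archives would be restored to the same paths before starting the services.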
Because the blueprint looks different in the new version, we created a cluster with the wizard in the Ambari UI and downloaded the blueprint.zip (containing blueprint.json and clustertemplate.json).
Later, registering the blueprint via the Ambari API (/api/v1/blueprints/:blueprintname) works, but then posting the cluster template to /api/v1/clusters/:clustername ends in an error response:
status 400: Stack information should be provided when creating a cluster
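One known cause of this 400 error is a registered blueprint whose JSON lacks the stack section: Ambari expects the Blueprints block to carry stack_name and stack_version. It is worth checking whether the exported blueprint.json contains something like the following (the blueprint name here is a placeholder):

```json
{
  "Blueprints": {
    "blueprint_name": "my-hdf32-blueprint",
    "stack_name": "HDF",
    "stack_version": "3.2"
  },
  "host_groups": [ ]
}
```

If the stack fields are missing from the exported file, adding them before POSTing the blueprint is a reasonable first thing to try.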