Support Questions

How to handle HDP cluster configuration?


Hi everybody,

I am currently working on a cluster migration from HDP 2.2 to HDP 2.6. All in all, I will have to migrate 15 different clusters, so I am thinking about automation. I know about blueprints, which can be used for initial cluster creation but not for modifying an existing cluster. So I am not sure how to do the migrations.

Here is what I need to do:

  • Upgrade from HDP 2.2 to HDP 2.4, and from HDP 2.4 to HDP 2.6 (this part is OK)
  • Migrate the configuration from HDP 2.2 to the default values — or should I do the upgrade first?
    • How do I restore the default (recommended) values in HDP without doing hundreds of clicks?
    • And then, which API should I use to apply the cluster-specific configuration (HDP 2.6)?
  • Create blueprints of the HDP 2.6 clusters (for later cluster creation) (with all values, or just the overridden ones?)
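On the last bullet: Ambari can export a cluster's full configuration as a blueprint (GET /api/v1/clusters/&lt;name&gt;?format=blueprint), but for reuse across similar clusters it can help to keep only the overridden values. A minimal sketch of that trimming step — the config dicts and property values below are made up for illustration, not real stack defaults:

```python
# Sketch: reduce an exported blueprint's "configurations" section to just
# the properties that differ from the stack defaults, so one slimmed-down
# blueprint can be reused across similar clusters.

def overridden_only(blueprint_configs, stack_defaults):
    """Keep only properties whose value differs from the stack default."""
    trimmed = []
    for entry in blueprint_configs:           # e.g. [{"core-site": {...}}, ...]
        for config_type, props in entry.items():
            defaults = stack_defaults.get(config_type, {})
            diff = {k: v for k, v in props.items() if defaults.get(k) != v}
            if diff:
                trimmed.append({config_type: diff})
    return trimmed

# Hypothetical example data:
blueprint_configs = [
    {"core-site": {"fs.trash.interval": "4320", "io.file.buffer.size": "131072"}},
]
stack_defaults = {
    "core-site": {"fs.trash.interval": "360", "io.file.buffer.size": "131072"},
}

print(overridden_only(blueprint_configs, stack_defaults))
# -> [{'core-site': {'fs.trash.interval': '4320'}}]
```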

As you can read, my goal is to have all clusters upgraded with the "correct" configuration. I basically have two kinds of clusters (small and large), so I can reuse the configurations.

What do you think ?

Kind regards

Manfred PAUL


Super Mentor

@Manfred PAUL

A blueprint is a cluster creation template, which is good for automating cluster creation.

However, if you already have 15 clusters created and want to upgrade them to HDP 2.6, then I see the only approach being to perform a Rolling Upgrade / Express Upgrade, as it takes care of a lot of things, such as the schema upgrades for the NameNode / Hive Metastore, etc.

The best approach for the upgrade is to use the Rolling Upgrade / Express Upgrade options from the Ambari UI. With an Ambari-managed upgrade you get a better option to pause the upgrade when needed, and also to finalize the upgrade or roll back in case of any issue.



But how (in the case of an Express or Rolling Upgrade) should I handle the configuration upgrade? I forgot to mention that we have customized some properties (for example, the "hadoop-env template"). We added, for example, JMX for all NameNodes/DataNodes, HBase, etc. In this case, Ambari won't upgrade those properties since they have been modified. And here comes the problem: as the properties (hadoop-env, hbase-env, ...) have changed to use Java 8, how should I handle mine?
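One way to deal with a customized template like this is to stop carrying the old (pre-Java-8) template forward, and instead re-apply the local customization on top of the new stack's recommended template after the upgrade. A sketch of that idea — the marker comment and JMX options below are illustrative, not the real HDP template content:

```python
# Sketch: re-apply a site-specific customization (here: JMX options)
# on top of the new stack's recommended hadoop-env template, instead of
# keeping the stale pre-upgrade template. All template text is made up.

JMX_BLOCK = (
    "# --- site-specific JMX additions ---\n"
    'export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS '
    '-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8004"\n'
)

def reapply_customization(recommended_template, custom_block=JMX_BLOCK):
    """Append the local customization to the new recommended template,
    unless it is already present (so the operation is repeatable)."""
    if custom_block in recommended_template:
        return recommended_template
    return recommended_template.rstrip("\n") + "\n\n" + custom_block

# Stand-in for the HDP 2.6 default hadoop-env template:
new_template = "export JAVA_HOME={{java_home}}\n"
merged = reapply_customization(new_template)
print(merged)
```

Kept as a small pure function like this, the same merge can be scripted against whatever the "recommended values" step produces, rather than hand-editing each cluster's template.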

Here is what I could do:

  1. Upgrade Ambari from 2.0 to 2.2
  2. Change the config to the recommended values (I don't know if this feature is already available in 2.2), e.g. Java 8 support
  3. Use an express upgrade to upgrade HDP from 2.2 to 2.4
  4. Upgrade Ambari from 2.2 to 2.5 (or 2.6)
  5. Change the config to the recommended values (do I need this?)
  6. Use an express upgrade to upgrade HDP from 2.4 to 2.6
  7. Apply the custom config on top of the HDP 2.6 cluster using Ambari 2.5 (or 2.6)

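For step 7 across 15 clusters, the config changes can be pushed through Ambari's REST API: a PUT to /api/v1/clusters/&lt;cluster&gt; with a "desired_config" body creates a new configuration version for a given config type. A sketch that only builds and prints the payload (the cluster names and property values are made up, and the actual HTTP call is left out so this runs without a live cluster):

```python
import json
import time

# Sketch: build the JSON body Ambari's REST API expects when setting a new
# configuration version (PUT /api/v1/clusters/<cluster>). Cluster names
# and properties below are hypothetical; sending the request (e.g. with
# curl or urllib) is omitted here.

def desired_config_payload(config_type, properties, tag=None):
    return json.dumps({
        "Clusters": {
            "desired_config": {
                "type": config_type,
                "tag": tag or "version" + str(int(time.time())),
                "properties": properties,
            }
        }
    })

clusters = ["small-cluster-01", "large-cluster-01"]   # hypothetical names
for cluster in clusters:
    body = desired_config_payload(
        "core-site", {"fs.trash.interval": "4320"}, tag="version1")
    print("PUT /api/v1/clusters/%s  %s" % (cluster, body))
```

Ambari also ships a helper script (configs.sh, and in newer versions configs.py, under /var/lib/ambari-server/resources/scripts/) that wraps these get/set calls, which may be enough on its own for looping over the clusters.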
If I have to do this for all 15 clusters, it will take some time, so I am looking into automation as well.

Thank you
