
hdp installation using ambari blueprint


I am installing a cluster using an Ambari blueprint exported from another cluster.

My query is: when we create the hostmapping.json file, does it need to have the same number of hosts as the cluster the blueprint was exported from?

The blueprint exported from the other cluster contains fields like:

"name" : "host_group_5",
"cardinality" : "1"

and the hostmapping.json file we create contains:

"host_groups" : [ { "name" : "host_group_5", "hosts" : [ { "fqdn" : "prodnode1.openstacklocal" } ] },

So my question is: does the present cluster need the same number of hosts as the cluster from which we are getting the Ambari blueprint?
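For reference, a minimal hostmapping.json can be generated like this (a sketch; the blueprint name and the FQDN are hypothetical placeholders, and the host-group names must match those in the exported blueprint):

```python
import json

# Host-group names come from the exported blueprint; the FQDN below is a
# hypothetical placeholder for a host in the *new* cluster.
host_mapping = {
    "blueprint": "my-exported-blueprint",  # name the blueprint was registered under
    "host_groups": [
        {"name": "host_group_5",
         "hosts": [{"fqdn": "prodnode1.openstacklocal"}]},
        # More or fewer hosts per group than the source cluster is fine;
        # just cover every host group the blueprint defines.
    ],
}

print(json.dumps(host_mapping, indent=2))
```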

My second query is:

When I fetch the Ambari blueprint from another cluster, the last lines look like:

"configurations" : [ ],
"name" : "host_group_5",
"cardinality" : "1" } ],
"Blueprints" : { "stack_name" : "HDP", "stack_version" : "2.5" }

but the cluster I extracted the blueprint from has more nodes than the blueprint's host groups would suggest.
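One thing worth knowing here: when Ambari exports a blueprint, hosts with an identical component layout are collapsed into a single host group, and the "cardinality" field records how many hosts that group covered at export time. So the host-group count can be smaller than the node count. A quick sanity check (a sketch using a small inline stand-in for the exported JSON):

```python
# Minimal stand-in for a blueprint exported from the source cluster
# (a hypothetical two-group example; real exports carry more fields).
blueprint = {
    "host_groups": [
        {"name": "host_group_1", "cardinality": "3"},
        {"name": "host_group_5", "cardinality": "1"},
    ],
    "Blueprints": {"stack_name": "HDP", "stack_version": "2.5"},
}

# Summing each group's "cardinality" recovers the source cluster's
# node count even though there are only two host groups.
total_hosts = sum(int(g["cardinality"]) for g in blueprint["host_groups"])
print(total_hosts)  # 4 hosts in the source cluster for this example
```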

2 Replies

Re: hdp installation using ambari blueprint

Super Mentor

@Anurag Mishra

Regarding your query:

-- > Do we need the same number of hosts in the present cluster as in the cluster from which we are getting the Ambari blueprint?

>>>> No, we can create the hostmapping.json file based on our requirement. It can have more or fewer hosts than the actual cluster from which the blueprint was extracted (exported).

We can even write the whole blueprint JSON file and host mapping file from scratch.

You can edit your blueprint JSON file so that its host groups match the ones referenced in your hostmapping JSON file.
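As a sanity check before registering, you can verify the two files agree on host-group names (a sketch with inline stand-in data; in practice you would load both files with json.load):

```python
# Stand-in data; in practice load these with json.load() from your
# exported blueprint and hand-written hostmapping files.
blueprint = {
    "host_groups": [
        {"name": "host_group_1", "cardinality": "3"},
        {"name": "host_group_5", "cardinality": "1"},
    ],
}
host_mapping = {
    "host_groups": [
        {"name": "host_group_5",
         "hosts": [{"fqdn": "prodnode1.openstacklocal"}]},
    ],
}

blueprint_groups = {g["name"] for g in blueprint["host_groups"]}
mapped_groups = {g["name"] for g in host_mapping["host_groups"]}

# Every group the mapping mentions must exist in the blueprint;
# a typo like "host_group5" vs "host_group_5" is caught here.
unknown = mapped_groups - blueprint_groups
print(unknown)  # set() when the names line up
```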


Re: hdp installation using ambari blueprint

Rising Star

@Anurag Mishra

No, you don't need the same number of hosts as in the original cluster. However, you do need to map the hosts and/or their host groups to the relevant configurations. Say, for instance, you have ZooKeeper on one node in your dev setup, but in the next environment you have a three-node ZooKeeper ensemble; the quorum property will then change for anything that uses ZooKeeper (e.g. HBase, Hive, etc.). The specific value needs to point to the specific host_group name, and the blueprint processor will take care of the rest. However, I have found this to be limiting, although it may work in more general scenarios where you scale out, i.e. have the same profile of machines in every environment. There were also a few bugs, e.g. the Hive JDBC URL is not processed by the blueprint processor.
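The host-group reference mentioned above uses Ambari's %HOSTGROUP::name% token, which the blueprint processor replaces with the actual hostnames supplied in hostmapping.json at deploy time. A sketch of what such a configuration entry might look like (the group names are placeholders):

```python
# Fragment of a blueprint "configurations" section; the
# %HOSTGROUP::...% tokens are resolved by the blueprint processor
# into the FQDNs supplied in hostmapping.json.
hbase_site = {
    "hbase-site": {
        "properties": {
            "hbase.zookeeper.quorum":
                "%HOSTGROUP::host_group_1%,"
                "%HOSTGROUP::host_group_2%,"
                "%HOSTGROUP::host_group_3%",
        }
    }
}

quorum = hbase_site["hbase-site"]["properties"]["hbase.zookeeper.quorum"]
print(quorum.count("%HOSTGROUP::"))  # 3 ZooKeeper host groups referenced
```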

Indeed, the approach I took to Ansible-ize the installation (and it would be similar with any other automation tool) is to supply the values into the configuration from, say, your Ansible inventory, i.e. automate the values that need to be set for the hosts, then define the host_groups using those hosts and assign the components to those host_groups. This maintains flexibility, and eventually, in an environment that matches your production environment, you can just scale out.
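That inventory-driven approach can be sketched roughly like this (the inventory group names and hostnames are hypothetical; a real setup would read them from your automation tool's inventory):

```python
# Hypothetical inventory, in the spirit of an Ansible inventory's groups.
inventory = {
    "masters": ["master1.example.com"],
    "workers": ["worker1.example.com", "worker2.example.com"],
}

# Derive the hostmapping host_groups from the inventory, so the same
# automation works whether an environment has 2 hosts or 20.
host_groups = [
    {"name": group, "hosts": [{"fqdn": h} for h in hosts]}
    for group, hosts in inventory.items()
]
print(len(host_groups))  # one host group per inventory group
```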

Note that in version 2.6 and beyond you would need to register the stack version using the VDF file rather than specifying the stack version in the blueprint.