In a previous post, we saw how to automate HDP installation with Kerberos authentication on a multi-node cluster using Ambari Blueprints.

In this post, we will see how to deploy a multi-node HDP cluster with ResourceManager HA via Ambari Blueprints.

.

Below are the steps to install a multi-node HDP cluster with ResourceManager HA using an internal repository via Ambari Blueprints.

.

Note - From Ambari 2.6.x onwards, you must register a VDF (Version Definition File) in order to use an internal repository; otherwise Ambari will pick up the latest HDP version and use the public repos. Please see the document below for more information. For Ambari versions below 2.6.x, this guide works without any modifications.

Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_reln...

.

Step 1: Install the Ambari server using the steps mentioned in the below link

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing...

.

Step 2: Register ambari-agent manually

Install the ambari-agent package on all the nodes in the cluster and, in /etc/ambari-agent/conf/ambari-agent.ini, set hostname to the Ambari server's FQDN.
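The agent-side change can be sketched as below. The FQDN is a hypothetical placeholder, and the stub-file branch exists only so the snippet can run outside a real node; on an actual cluster node the file is /etc/ambari-agent/conf/ambari-agent.ini:

```shell
# Run on every cluster node. Assumes a yum-based OS with the Ambari repo configured.
# yum install -y ambari-agent    # install the agent package first

# Hypothetical value - replace with your Ambari server's FQDN
AMBARI_SERVER_FQDN="ambari.example.com"

INI="/etc/ambari-agent/conf/ambari-agent.ini"
# Stub file so the snippet can be demonstrated outside a real node
[ -f "$INI" ] || { INI=./ambari-agent.ini; printf '[server]\nhostname=localhost\nurl_port=8440\n' > "$INI"; }

# Point the agent at the Ambari server
sed -i "s/^hostname=.*/hostname=${AMBARI_SERVER_FQDN}/" "$INI"
grep '^hostname=' "$INI"

# ambari-agent start    # then start the agent
```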

.

Step 3: Configure blueprints

Please follow the below steps to create the blueprint files.

.

3.1 Create a hostmap.json (cluster creation template) file as shown below:

Note – This file contains the details of all the hosts that are part of your HDP cluster. The Apache Ambari documentation refers to it as the cluster creation template.

{
  "blueprint" : "hdptest",
  "default_password" : "hadoop",
  "host_groups" : [
    {
      "name" : "blueprint1",
      "hosts" : [
        { "fqdn" : "blueprint1.crazyadmins.com" }
      ]
    },
    {
      "name" : "blueprint2",
      "hosts" : [
        { "fqdn" : "blueprint2.crazyadmins.com" }
      ]
    },
    {
      "name" : "blueprint3",
      "hosts" : [
        { "fqdn" : "blueprint3.crazyadmins.com" }
      ]
    }
  ]
}
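Before registering anything, it is worth syntax-checking each JSON file, since a malformed file only surfaces later as an HTTP 400 from Ambari. A minimal check using Python's built-in json.tool is sketched below; FILE and the stub file are illustrative, so point FILE at your real hostmap.json, cluster_config.json, etc.:

```shell
# FILE is an assumption - point it at the JSON file you want to check
FILE="${FILE:-hostmap.json}"

# Create a tiny stub if the file is absent, so the check runs standalone
[ -f "$FILE" ] || printf '{ "blueprint" : "hdptest", "host_groups" : [] }\n' > "$FILE"

# json.tool exits non-zero on any syntax error
if python3 -m json.tool "$FILE" > /dev/null 2>&1; then
  echo "$FILE: valid JSON"
else
  echo "$FILE: INVALID JSON"
fi
```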

.

3.2 Create a cluster_config.json (blueprint) file; it contains the mapping of HDP components to host groups.

{
  "configurations" : [
    {
      "core-site" : {
        "properties" : {
          "fs.defaultFS" : "hdfs://%HOSTGROUP::blueprint1%:8020"
        }
      }
    },
    {
      "yarn-site" : {
        "properties" : {
          "hadoop.registry.rm.enabled" : "false",
          "hadoop.registry.zk.quorum" : "%HOSTGROUP::blueprint3%:2181,%HOSTGROUP::blueprint2%:2181,%HOSTGROUP::blueprint1%:2181",
          "yarn.log.server.url" : "http://%HOSTGROUP::blueprint3%:19888/jobhistory/logs",
          "yarn.resourcemanager.address" : "%HOSTGROUP::blueprint2%:8050",
          "yarn.resourcemanager.admin.address" : "%HOSTGROUP::blueprint2%:8141",
          "yarn.resourcemanager.cluster-id" : "yarn-cluster",
          "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
          "yarn.resourcemanager.ha.enabled" : "true",
          "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
          "yarn.resourcemanager.hostname" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm1" : "%HOSTGROUP::blueprint2%",
          "yarn.resourcemanager.hostname.rm2" : "%HOSTGROUP::blueprint3%",
          "yarn.resourcemanager.webapp.address.rm1" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.address.rm2" : "%HOSTGROUP::blueprint3%:8088",
          "yarn.resourcemanager.recovery.enabled" : "true",
          "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::blueprint2%:8025",
          "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::blueprint2%:8030",
          "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
          "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::blueprint2%:8088",
          "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::blueprint2%:8090",
          "yarn.timeline-service.address" : "%HOSTGROUP::blueprint3%:10200",
          "yarn.timeline-service.webapp.address" : "%HOSTGROUP::blueprint3%:8188",
          "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::blueprint3%:8190"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "blueprint1",
      "components" : [
        { "name" : "NAMENODE" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint2",
      "components" : [
        { "name" : "SECONDARY_NAMENODE" },
        { "name" : "RESOURCEMANAGER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    },
    {
      "name" : "blueprint3",
      "components" : [
        { "name" : "RESOURCEMANAGER" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "NODEMANAGER" },
        { "name" : "DATANODE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "HDFS_CLIENT" },
        { "name" : "YARN_CLIENT" },
        { "name" : "MAPREDUCE2_CLIENT" }
      ],
      "cardinality" : 1
    }
  ],
  "Blueprints" : {
    "blueprint_name" : "hdptest",
    "stack_name" : "HDP",
    "stack_version" : "2.5"
  }
}

Note - I have kept the ResourceManagers on blueprint2 and blueprint3; you can change this according to your requirements.

.

Step 4: Create an internal repository map

.

4.1: HDP repository – copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0",
    "verify_base_url" : true
  }
}

.

4.2: HDP-UTILS repository – copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
  "Repositories" : {
    "base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21",
    "verify_base_url" : true
  }
}

.

Step 5: Register the blueprint with the Ambari server by executing the below command. Note that the blueprint name in the URL should match the "blueprint_name" field in cluster_config.json ("hdptest"), which is also the name hostmap.json refers to.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/hdptest -d @cluster_config.json
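To confirm the registration succeeded, Ambari's blueprints endpoint can be queried back; the hostname, credentials, and blueprint name below are placeholders for your own values:

```shell
# List all registered blueprints
curl -H "X-Requested-By: ambari" -u admin:admin \
  http://<ambari-server-hostname>:8080/api/v1/blueprints

# Fetch a single blueprint by the name it was registered under
curl -H "X-Requested-By: ambari" -u admin:admin \
  http://<ambari-server-hostname>:8080/api/v1/blueprints/<blueprint-name>
```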

.

Step 6: Setup the internal repositories via the REST API.

Execute the below curl calls to set up the internal repositories. Note that the stack version in the URL (2.5 here) must match the "stack_version" in your blueprint.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/reposi... -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/reposi... -d @hdputils-repo.json

.

Step 7: Pull the trigger! The below command will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json
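The POST above returns immediately with a request href; the installation itself runs asynchronously. Progress can be followed in the Ambari web UI, or polled via the requests API (placeholders as before):

```shell
# The cluster-creation POST returns JSON similar to:
#   { "href" : "http://.../api/v1/clusters/multinode-hdp/requests/1", ... }
# Poll that request id to watch overall progress and status
curl -H "X-Requested-By: ambari" -u admin:admin \
  "http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/1?fields=Requests/progress_percent,Requests/request_status"
```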

.

Please feel free to comment if you need any further help on this. Happy Hadooping!! :)

Comments
One thing you will want to change is you are missing a <space> in your curl command! You should have a space between "X-Requested-By: ambari" and -X. For example, step 7 should look like this:

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json

You will want to update this for all your curl examples that have this issue on any of your helpful guides!


Thanks @Chad Woodhead - Updated! 🙂