Support Questions

Find answers, ask questions, and share your expertise

Setting up a 4-Node Cluster (1 Edge Node, 1 Master Node and 2 Slave Nodes) on Microsoft Azure using Cloudbreak

Rising Star

I am planning to install a 4-node cluster (1 edge node, 1 master node and 2 slave nodes) on Microsoft Azure using Cloudbreak. I need components such as HDFS, YARN, Spark, Hive, Kafka, ZooKeeper, Ambari Server, Ambari Agent, Ranger, Flume and PostgreSQL installed on the cluster, and I am planning to follow the approach below for setting it up.

1) Install Cloudbreak on Edge Node first.

2) Create an Ambari blueprint with the components below.

3) Components to be installed on the NameNode: HDFS_CLIENT, Resource Manager, Hive Server, Hive Metastore, Ambari Server, Oozie Server, Oozie Client, Kafka Broker, ZooKeeper Server, ZooKeeper Client, Spark Master, Flume Server, PostgreSQL, Spark Thrift Server, YARN_CLIENT, TEZ_CLIENT, Ranger

4) Components to be installed on the Slave Nodes: DataNode, NodeManager, Ambari Agent, Kafka Broker, ZooKeeper Client, ZooKeeper Server, Spark Worker, HDFS Client, Ranger
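For readers following along, the host-group layout described above could be sketched as an Ambari blueprint roughly like this. This is a hedged illustration only: the blueprint name is made up, and the component lists are abbreviated rather than the full set from this post.

```shell
# Illustrative only: write a minimal Ambari-style blueprint with the two
# host groups described above. Component names follow the HDP stack
# definitions; the lists here are abbreviated, not the full set.
cat > blueprint.json <<'EOF'
{
  "Blueprints": { "blueprint_name": "azure-4node", "stack_name": "HDP", "stack_version": "2.5" },
  "host_groups": [
    { "name": "master",
      "components": [ {"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"},
                      {"name": "HIVE_SERVER"}, {"name": "HIVE_METASTORE"},
                      {"name": "KAFKA_BROKER"}, {"name": "ZOOKEEPER_SERVER"} ],
      "cardinality": "1" },
    { "name": "slave",
      "components": [ {"name": "DATANODE"}, {"name": "NODEMANAGER"},
                      {"name": "ZOOKEEPER_CLIENT"}, {"name": "HDFS_CLIENT"} ],
      "cardinality": "2" }
  ]
}
EOF
# Sanity-check that the file is valid JSON before uploading it anywhere.
python3 -m json.tool blueprint.json > /dev/null && echo "blueprint OK"
```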

Kindly check whether this is the correct approach and let me know in case I am missing something. Also, kindly let me know which Hadoop clients need to be set up on the Edge Node in the above scenario.

Much appreciated.

1 ACCEPTED SOLUTION

Expert Contributor

That blueprint is invalid, because:

  • it must not contain an input{...} section
  • you should not add hosts to it; Cloudbreak fills in the host section automatically and posts it not in the blueprint but in a separate cluster creation template

Please find the updated blueprint attached. I tested it and it worked for me: fixed-hcc-blueprintjson.txt
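To illustrate the two fixes, here is a hedged sketch that takes a blueprint containing the rejected sections and strips them out. The file names and sample content are made up for the example; only the two removals reflect the advice above.

```shell
# Illustrative sketch: remove the sections Cloudbreak rejects from an
# exported blueprint -- the top-level "inputs" section and any per-host-group
# "hosts" arrays (Cloudbreak supplies hosts via its cluster creation template).
cat > raw-blueprint.json <<'EOF'
{
  "inputs": { "example": "rejected by Cloudbreak" },
  "Blueprints": { "blueprint_name": "azure-4node", "stack_name": "HDP", "stack_version": "2.5" },
  "host_groups": [
    { "name": "master", "hosts": [ {"fqdn": "master.example.com"} ],
      "components": [ {"name": "NAMENODE"} ], "cardinality": "1" }
  ]
}
EOF
python3 - <<'EOF'
import json

with open("raw-blueprint.json") as f:
    bp = json.load(f)

bp.pop("inputs", None)             # drop the inputs section
for hg in bp.get("host_groups", []):
    hg.pop("hosts", None)          # hosts belong in the cluster template, not here

with open("fixed-blueprint.json", "w") as f:
    json.dump(bp, f, indent=2)
EOF
```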


6 REPLIES

Rising Star
@Attila Kanto

Could you please suggest something?

Expert Contributor

Hi @rahul gulati,

At first look it seems quite OK, although there is one thing I noticed: MYSQL_SERVER is missing for the Hive Metastore from the master host group.

By default Ambari can install MySQL as the default database for the Hive Metastore. For your reference I have attached the available services for HDP 2.5. If you have a running Ambari, you can get them from api/v1/stacks/HDP/versions/2.5/services?fields=components/StackServiceComponents
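The API path mentioned above can be queried with curl against a live Ambari server. The host name and the default admin credentials below are placeholders, not values from this thread.

```shell
# Placeholder host -- point this at your own Ambari server.
AMBARI_HOST="ambari.example.com"
URL="http://${AMBARI_HOST}:8080/api/v1/stacks/HDP/versions/2.5/services?fields=components/StackServiceComponents"
echo "$URL"
# Against a live cluster (default admin credentials shown; change as needed):
# curl -u admin:admin "$URL"
```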

Attila

Rising Star

Thanks Attila for the information. I have made the blueprint based on the scenario mentioned above. Could you please check and let me know if I missed anything? I am attaching it: blueprintjson.txt


Rising Star

@Attila Kanto

I got to know yesterday that Cloudbreak cannot install Hadoop clusters onto already running VMs. Is that correct? You mentioned that we do not need to provide FQDNs in the blueprint file; rather, they should be present in the cluster creation file. But in the Cloudbreak UI I do not see any way to upload a cluster creation template. Could you please suggest? We are stuck, as we already have 4 VMs running in Azure but do not know how to use Cloudbreak to install Hadoop on those VMs.

Much appreciated.

Thanks

Rahul

Super Collaborator

@rahul gulati

Unfortunately, Cloudbreak currently does not support deploying a cluster onto existing machines; it can only provision clusters end to end, creating the VMs itself. Please delete those VMs and start a new cluster.

Br,

R