Setting up a 4-Node Cluster (1 Edge Node, 1 Master Node, and 2 Slave Nodes) on Microsoft Azure Using Cloudbreak
- Labels: Apache Ambari, Hortonworks Cloudbreak
Created ‎02-02-2017 11:45 AM
I am planning to install a 4-node cluster (1 edge node, 1 master node, and 2 slave nodes) on Microsoft Azure using Cloudbreak. I need the following components installed on the cluster: HDFS, YARN, Spark, Hive, Kafka, ZooKeeper, Ambari Server, Ambari Agent, Ranger, Flume, and PostgreSQL. I am planning to follow the approach below for setting up the cluster.
1) Install Cloudbreak on the edge node first.
2) Create an Ambari blueprint with the components below (a rough sketch follows this list).
3) Components to be installed on the master node (NameNode): HDFS_CLIENT, ResourceManager, Hive Server, Hive Metastore, Ambari Server, Oozie Server, Oozie Client, Kafka Broker, ZooKeeper Server, ZooKeeper Client, Spark Master, Flume Server, PostgreSQL, Spark Thrift Server, YARN_CLIENT, TEZ_CLIENT, Ranger
4) Components to be installed on the slave nodes: DataNode, NodeManager, Ambari Agent, Kafka Broker, ZooKeeper Client, ZooKeeper Server, Spark Worker, HDFS Client, Ranger
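A rough sketch of how the host_groups section of such a blueprint could look, assuming HDP 2.5 component names; the host group names, cardinalities, and the trimmed component list are illustrative assumptions, not a tested blueprint:

```json
{
  "Blueprints": {
    "blueprint_name": "azure-4node-example",
    "stack_name": "HDP",
    "stack_version": "2.5"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [
        { "name": "NAMENODE" },
        { "name": "RESOURCEMANAGER" },
        { "name": "HIVE_SERVER" },
        { "name": "HIVE_METASTORE" },
        { "name": "OOZIE_SERVER" },
        { "name": "KAFKA_BROKER" },
        { "name": "ZOOKEEPER_SERVER" },
        { "name": "HDFS_CLIENT" },
        { "name": "YARN_CLIENT" },
        { "name": "TEZ_CLIENT" }
      ]
    },
    {
      "name": "slave",
      "cardinality": "2",
      "components": [
        { "name": "DATANODE" },
        { "name": "NODEMANAGER" },
        { "name": "KAFKA_BROKER" },
        { "name": "ZOOKEEPER_SERVER" },
        { "name": "ZOOKEEPER_CLIENT" },
        { "name": "HDFS_CLIENT" }
      ]
    }
  ]
}
```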
Kindly check whether this is the correct approach and let me know if I am missing anything. Also, could you let me know which Hadoop clients need to be set up on the edge node in the above scenario?
Much appreciated.
Created ‎02-02-2017 11:54 AM
Could you please suggest something?
Created ‎02-02-2017 12:47 PM
Hi @rahul gulati,
At first look it seems quite OK, although one thing I noticed is that MYSQL_SERVER is missing for the Hive Metastore from the master node's host group.
By default, Ambari can install MySQL as the default database for the Hive Metastore. For your reference, I have attached the available services for HDP 2.5. If you have a running Ambari, you can also get them from api/v1/stacks/HDP/versions/2.5/services?fields=components/StackServiceComponents
Attila
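In blueprint terms, that would mean adding MYSQL_SERVER alongside the Hive components in the master host group, something like the snippet below (the host group name is assumed from the post above):

```json
{
  "name": "master",
  "components": [
    { "name": "HIVE_METASTORE" },
    { "name": "HIVE_SERVER" },
    { "name": "MYSQL_SERVER" }
  ]
}
```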
Created ‎02-02-2017 02:38 PM
Thanks, Attila, for the information. I have made the blueprint based on the scenario mentioned above. Could you please check it and let me know if I missed anything? I am attaching it: blueprintjson.txt
Created ‎02-03-2017 05:52 PM
That blueprint is invalid, because:
- it must not contain an input{...} section, and
- you should not add hosts to it; Cloudbreak fills in the host section automatically and posts it not in the blueprint but in a separate cluster creation template.
Please find the updated blueprint attached. I tested it and it worked for me: fixed-hcc-blueprintjson.txt
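For context, the hosts Cloudbreak fills in end up in a cluster creation template that is posted to Ambari separately from the blueprint. A minimal sketch of its shape, with a placeholder blueprint name, password, and FQDNs (Cloudbreak generates the real values for you):

```json
{
  "blueprint": "azure-4node-example",
  "default_password": "<placeholder>",
  "host_groups": [
    { "name": "master", "hosts": [ { "fqdn": "master-1.example.internal" } ] },
    { "name": "slave", "hosts": [ { "fqdn": "slave-1.example.internal" }, { "fqdn": "slave-2.example.internal" } ] }
  ]
}
```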
Created ‎02-14-2017 05:24 AM
I found out yesterday that Cloudbreak cannot install Hadoop clusters onto already-running VMs. Is that correct? You mentioned that we do not need to provide FQDNs in the blueprint file; rather, they should be present in the cluster creation file. But in the Cloudbreak UI I do not see any way to upload a cluster creation template. Could you please suggest something? We are stuck, as we already have 4 VMs running in Azure but do not know how to use Cloudbreak to install Hadoop on those VMs.
Much appreciated.
Thanks
Rahul
Created ‎02-14-2017 07:56 AM
Unfortunately, Cloudbreak currently does not support deploying a cluster onto existing machines. The only way to provision a cluster is to let Cloudbreak deploy the whole cluster end to end. Please delete those VMs and start a new cluster.
Br,
R
