This article continues Part 1, which covered provisioning credentials for an HDP/HDF cluster on Google Cloud. Now that the Google credential is created, we can provision the HDP/HDF cluster. Let's start with the HDP cluster.

  • Log in to the Cloudbreak UI and click Create Cluster, which opens the create-cluster wizard with both basic and advanced options. On the General Configuration page, select the previously created Google credential, enter a name for the cluster, select the region as shown below, and select either the HDP or HDF version. For the cluster type, select the cluster blueprint appropriate for your requirements.

68394-gcphdpconfig.png 68395-gcphdpconfig2.png

The blueprint options available in the Cloudbreak 2.5 Tech Preview are shown below.

68403-gcpblueprints.png
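Each cluster type in the wizard maps to an Ambari blueprint behind the scenes. The sketch below shows the general shape of such a blueprint as a Python dictionary; the blueprint name, stack version, host groups, and component lists are illustrative assumptions, not one of the bundled Cloudbreak blueprints.

```python
import json

# Illustrative Ambari-style blueprint; names and components are
# assumptions for the sketch, not an official Cloudbreak blueprint.
blueprint = {
    "Blueprints": {
        "blueprint_name": "hdp-small-example",  # hypothetical name
        "stack_name": "HDP",
        "stack_version": "2.6",
    },
    "host_groups": [
        {
            "name": "master",
            "components": [{"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"}],
            "cardinality": "1",
        },
        {
            "name": "worker",
            "components": [{"name": "DATANODE"}, {"name": "NODEMANAGER"}],
            "cardinality": "3",
        },
    ],
}

# Serialize to the JSON form a blueprint file would contain.
print(json.dumps(blueprint, indent=2))
```

The host groups declared here are what the Hardware and Storage page of the wizard asks you to size in the next step.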

  • Next, configure the Hardware and Storage settings. Select the Google VM instance type from the dropdown and enter the number of instances for each host group. One host group must run the Ambari server, and its Group Size must be set to "1".

68398-gcphwstorage-1.png
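The "Ambari server group must have Group Size 1" rule from the step above can be sanity-checked programmatically. A minimal sketch, assuming a hypothetical list of group definitions mirroring the wizard fields (the group names and instance types are examples only):

```python
def ambari_group_ok(host_groups, ambari_group="master"):
    """Return True if the host group designated for the Ambari server
    has a Group Size (node count) of exactly 1, as the wizard requires."""
    for g in host_groups:
        if g["name"] == ambari_group:
            return g["node_count"] == 1
    return False  # designated group not found

# Hypothetical hardware selection mirroring the wizard fields.
groups = [
    {"name": "master", "instance_type": "n1-standard-4", "node_count": 1},
    {"name": "worker", "instance_type": "n1-standard-2", "node_count": 3},
]
print(ambari_group_ok(groups))  # True
```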

  • Next, set up the Network. You can select an existing network or create a new one.

68399-gcpnetworkconfig.png
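When creating a new network, you typically supply a subnet CIDR. Python's standard `ipaddress` module can validate the value before you submit the form; the CIDR strings below are just examples:

```python
import ipaddress

def valid_subnet(cidr):
    """Return True if cidr parses as a proper network, e.g. 10.0.0.0/16."""
    try:
        ipaddress.ip_network(cidr)  # strict mode rejects host bits
        return True
    except ValueError:
        return False

print(valid_subnet("10.0.0.0/16"))  # True
print(valid_subnet("10.0.0.5/16"))  # False: host bits set
```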

  • On the Security configuration page, provide the cluster admin username and password. Then either generate a new SSH key pair or select an existing SSH public key. You will use the matching private key to access your nodes via SSH.

68402-gcphdfsecurity.png
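If you paste an existing public key, a quick format check helps catch copy/paste truncation. The sketch below loosely validates an OpenSSH public key line; the "key" it checks is a dummy blob constructed for the example, not a real key:

```python
import base64
import struct

def looks_like_openssh_pubkey(line):
    """Loose check of '<type> <base64-blob> [comment]': the blob must
    decode as base64 and embed the key-type string, per OpenSSH format."""
    parts = line.strip().split()
    if len(parts) < 2 or not parts[0].startswith("ssh-"):
        return False
    try:
        blob = base64.b64decode(parts[1], validate=True)
    except Exception:
        return False
    return parts[0].encode() in blob

# Build a dummy blob (length-prefixed type string + filler), as a stand-in
# for real key material.
key_type = b"ssh-rsa"
blob = struct.pack(">I", len(key_type)) + key_type + b"\x00" * 8
dummy = "ssh-rsa " + base64.b64encode(blob).decode() + " user@example"
print(looks_like_openssh_pubkey(dummy))  # True
```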

Finally, click Create Cluster, which redirects you to the Cloudbreak dashboard. The left image below shows cluster creation in progress, and the right image shows the HDP cluster successfully created on Google Cloud.

68400-gcphdpinprogress.png 68401-gcphdpclusterready.png

Once the HDP cluster is deployed, you can log in to the HDP nodes using your SSH private key with the tool of your choice. The following image shows a node login using the Google Cloud browser-based SSH option.

68404-gcpshell.png

Similarly, you can provision an HDF (NiFi flow management) cluster using Cloudbreak, which is included as part of the 2.5 Tech Preview.

Following are some key screenshots for reference. The network, storage, and security configuration is similar to what we saw in the HDP section earlier.

68405-gcphdfcreatecluster.png 68408-gcphdfcreated.png

Due to a limitation of my Google Cloud account subscription, I ran into an exception while creating the HDF cluster, which Cloudbreak surfaced clearly. I had to select a different region to resolve it.

68410-gcphdfexception2.png

The NiFi cluster was created successfully, as shown below.

68409-gcphdfcluster.png

Conclusion:

Cloudbreak gives you an easy way to provision and monitor the connected data platforms (HDP and HDF) on the cloud vendor of your choice, so you can focus on building modern data applications.


Last update: 08-17-2019 07:47 AM