Member since 04-27-2016 · 218 Posts · 133 Kudos Received · 25 Solutions
01-22-2019
04:45 PM
Ananya, the script was updated a while back to take care of this. You should be able to use an existing VPC and subnet. The only issue you might face is if an internet gateway is already attached to the VPC, since the script prefers to add a new internet gateway.
04-09-2018
08:52 PM
5 Kudos
This is the continuation of the Part-1 article on provisioning HDP/HDF clusters on Google Cloud. Now that we have the Google credential created, we can provision the HDP/HDF cluster. Let's start with an HDP cluster.

Log in to the Cloudbreak UI and click Create Cluster, which opens the create-cluster wizard with both basic and advanced options. On the General Configuration page:
Select the previously created Google credential.
Enter a name for the cluster.
Select a region, as shown below.
Select either an HDP or HDF version.
For Cluster Type, select the cluster blueprint appropriate for your requirements. The blueprint options available in the Cloudbreak 2.5 tech preview are shown below.

Next, configure hardware and storage. Select the Google VM instance type from the dropdown and enter the number of instances for each group. One of the host groups must run the Ambari server, and its Group Size must be set to "1".

Next, set up the network. You can select an existing network, or you have the option to create a new one.

On the security configuration page, provide the cluster admin username and password, then either generate a new SSH key pair or select an existing SSH public key. You will use the matching private key to access your nodes via SSH.

Finally, click Create Cluster, which redirects you to the Cloudbreak dashboard. The left image below shows cluster creation in progress, and the right image shows the successfully created HDP cluster on Google Cloud. Once the HDP cluster is deployed, you can log in to its nodes with your SSH private key using the tool of your choice. The following image shows a node login using the Google Cloud browser SSH option.

Similarly, you can provision an HDF (NiFi: Flow Management) cluster using Cloudbreak, which is included in the 2.5 tech preview. Some key screenshots follow for reference. The network, storage, and security configuration is the same as in the HDP section above.

Because of a limitation of my Google Cloud account subscription, I ran into an exception while creating the HDF cluster, which Cloudbreak surfaced clearly; I had to select a different region to resolve it. The NiFi cluster was then created successfully, as shown below.

Conclusion: Cloudbreak gives you an easy button to provision and monitor the connected data platform (HDP and HDF) on the cloud vendor of your choice, so you can build modern data applications.
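If you prefer automation over the UI wizard, the Cloudbreak CLI introduced in 2.4 can create the same cluster from a JSON template. The fragment below is an illustrative sketch, not the exact schema: the field names and values (cluster name, blueprint name, instance groups, machine types) are assumptions, and you should instead start from a template generated by your own Cloudbreak instance (e.g. via `cb cluster generate-template`, checking `cb --help` for the exact syntax in your version).

```json
{
  "general": {
    "credentialName": "gcp-credential",
    "name": "hdp-gcp-demo"
  },
  "placement": {
    "region": "us-central1",
    "availabilityZone": "us-central1-a"
  },
  "blueprintName": "HDP 2.6 Data Science: Apache Spark 2, Apache Zeppelin",
  "instanceGroups": [
    { "group": "master", "nodeCount": 1, "template": { "instanceType": "n1-standard-4" } },
    { "group": "worker", "nodeCount": 3, "template": { "instanceType": "n1-standard-4" } }
  ]
}
```

A template like this would then be submitted with something like `cb cluster create --cli-input-json cluster.json`; verify the flag names against `cb cluster create --help` for your Cloudbreak release.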
04-09-2018
05:42 PM
5 Kudos
Cloudbreak Overview

Cloudbreak enables enterprises to provision Hortonworks platforms in public (AWS, GCP, Azure) and private (OpenStack) cloud environments. It simplifies the provisioning, management, and monitoring of on-demand HDP and HDF clusters in virtual and cloud environments. The primary use cases for Cloudbreak are:
Dynamically configuring and managing clusters on public or private clouds.
Seamlessly managing elasticity requirements as cluster workloads change.
Defining network boundaries and configuring security groups.

This article focuses on deploying HDP and HDF clusters on Google Cloud.

Cloudbreak Benefits

You can spin up the connected data platform (HDP and HDF clusters) on the cloud vendor of your choice using open source Cloudbreak 2.0, which addresses the following scenarios:
Defining a comprehensive data strategy irrespective of deployment architecture (cloud or on-premises).
Addressing hybrid (on-premises and cloud) requirements.
Supporting key multi-cloud requirements.
Providing consistent and familiar security and governance across on-premises and cloud environments.

Cloudbreak 2 Enhancements

Hortonworks recently announced the general availability of the Cloudbreak 2.4 release. Some of the major enhancements in Cloudbreak 2.4 are:
New UX/UI: a greatly simplified and streamlined user experience.
New CLI: a new CLI that eases automation, an important capability for cloud DevOps.
Custom images: advanced support for "bring your own image", a critical feature for meeting enterprise infrastructure requirements.
Kerberos: the ability to enable Kerberos security on your clusters, a must for any enterprise deployment.

For a detailed overview of Cloudbreak 2.4, see the following HCC article: https://community.hortonworks.com/articles/174532/overview-of-cloudbreak-240.html

For Cloudbreak 2.5 tech preview details, see: https://community.hortonworks.com/content/kbentry/182293/whats-new-in-cloudbreak-250-tp.html

Prerequisites for Google Cloud Platform

This article assumes that you have already installed and launched the Cloudbreak instance, either on your own custom VM image or on Google Cloud Platform. The Cloudbreak documentation describes both options:
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.5.0/content/index.html
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.5.0/content/gcp-launch/index.html
To launch Cloudbreak and provision clusters, make sure you have a Google Cloud account. You can create one at https://console.cloud.google.com

Create a new project in GCP (e.g., the GCPIntegration project shown below).
To launch clusters on GCP, you must have a service account that Cloudbreak can use. Assign the admin roles for Compute Engine and Storage; you can check the required service account admin roles at Admin Roles.
Make sure you create the P12 key and store it safely.
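The service-account steps above can also be scripted with the gcloud CLI rather than the console. This is a sketch under stated assumptions: the project ID, account name, role list, and key filename are placeholders, and you should confirm the required roles against the Admin Roles documentation referenced above. The gcloud calls are guarded so the snippet is harmless where the CLI is not installed.

```shell
# Hypothetical names -- substitute your own project and service account.
PROJECT_ID="gcp-integration-project"
SA_NAME="cloudbreak-sa"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Runs only where the gcloud CLI is available and authenticated.
if command -v gcloud >/dev/null; then
  # Create the service account Cloudbreak will use.
  gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

  # Grant the Compute Engine and Storage admin roles (verify against the
  # Admin Roles documentation for your Cloudbreak version).
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" --role roles/compute.admin
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" --role roles/storage.admin

  # Create and download the P12 key; store it safely.
  gcloud iam service-accounts keys create cloudbreak-key.p12 \
    --iam-account "$SA_EMAIL" --key-file-type p12
fi
echo "$SA_EMAIL"
```

The derived email address (`name@project.iam.gserviceaccount.com`) is the Service Account email ID that the Cloudbreak credential form asks for later.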
This article assumes that you have successfully met the prerequisites and can open the Cloudbreak UI (shown below left) by visiting https://<IP_Addr or HostName>. Upon successful login you are redirected to the dashboard, which looks like the image on the right.

Create a Cloudbreak Credential for GCP

The first step before provisioning a cluster is to create a Cloudbreak credential for GCP. Cloudbreak uses this GCP credential to create the required resources on GCP. The steps to create the GCP credential are:
In the Cloudbreak UI, select Credentials from the navigation pane and click Create Credential. Under cloud provider, select Google Cloud Platform.
As shown below, provide the Google project ID and the service account email ID from your Google project, and upload the P12 key you created in the section above.
Once you provide all the details, Cloudbreak creates the GCP credential and displays it in the Credentials pane.

The next article, Part 2, covers in detail how to provision HDP and HDF clusters using the GCP credential.
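The same credential can be created from the Cloudbreak CLI instead of the UI. Treat the sketch below as a best-effort illustration: the subcommand and flag names are assumptions based on the 2.x `cb` CLI and should be verified with `cb credential create --help` on your installation, and the project ID, email, and key path are placeholders. The CLI call is guarded so the snippet does nothing where `cb` is not installed.

```shell
# Hypothetical values -- replace with your own project and service account.
PROJECT_ID="gcp-integration-project"
SA_EMAIL="cloudbreak-sa@${PROJECT_ID}.iam.gserviceaccount.com"
KEY_FILE="cloudbreak-key.p12"

# Runs only where the Cloudbreak CLI (`cb`) is installed and configured;
# flag names are assumed and should be checked against `cb --help`.
if command -v cb >/dev/null; then
  cb credential create google \
    --name gcp-credential \
    --project-id "$PROJECT_ID" \
    --service-account-id "$SA_EMAIL" \
    --service-account-private-key-file "$KEY_FILE"
fi
echo "credential inputs prepared for $SA_EMAIL"
```

Whether created in the UI or the CLI, the resulting credential is what the cluster wizard's General Configuration page refers to.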
12-15-2016
07:26 PM
@milind pandit Hello Milind, could you please share the .xsd and .json files? Thanks, Rajeev
02-07-2017
02:13 AM
The bucket was of course created, and I could access it with both the S3 browser and the S3 command line.
07-10-2017
05:11 PM
Is this article still valid for HDF version 3.0 which was released recently? Are there easier ways of deploying to Amazon?
06-01-2017
09:42 PM
@jeff Can you answer this? By the way, you get better visibility by posting a question as a separate thread rather than commenting below an article.