Created on 04-04-2018 09:24 PM - edited 08-17-2019 08:06 AM
Cloudbreak 2.5.0 Technical Preview is available now. Here are the highlights:
Creating HDF Clusters
You can use Cloudbreak to create HDF clusters from base images on AWS, Azure, Google Cloud, and OpenStack. In the Cloudbreak web UI, select "HDF 3.1" under Platform Version and then select an HDF blueprint.
Cloudbreak includes one default HDF blueprint, "Flow Management: Apache NiFi", and supports uploading your own custom HDF 3.1.1 NiFi blueprints.
Note the following when creating NiFi clusters:
When creating a cluster, open TCP port 9091 on the NiFi host group; without it, you will be unable to access the NiFi UI.
Enabling Kerberos is mandatory. You can either use your own KDC or have Cloudbreak create a test KDC.
Although Cloudbreak includes cluster scaling (including autoscaling), scaling is not fully supported by NiFi. Downscaling NiFi clusters is not supported, because removing a node before it has processed all of its data can result in data loss. There is also a known issue related to scaling listed in the Known Issues below.
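After provisioning, a quick way to confirm that port 9091 was opened correctly is to test TCP reachability from a machine that should have access. A minimal sketch in Python (the host name in the comment is a placeholder, not a real cluster address):

```python
import socket

def port_open(host: str, port: int = 9091, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder host: check the NiFi UI port on a cluster node.
# if not port_open("nifi-node.example.com"):
#     print("Port 9091 is closed; check the NiFi host group's security rules.")
```

If this returns False, revisit the security group (or firewall) rules attached to the NiFi host group before troubleshooting NiFi itself.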
For a tutorial on creating a NiFi cluster with Cloudbreak, refer to the following HCC post.
> HDF options in the create cluster wizard:
Using External Databases for Cluster Services
You can register an existing external RDBMS in the Cloudbreak UI or CLI so that it can be used by the cluster components that support it. After the RDBMS has been registered with Cloudbreak, it is available during cluster creation and can be reused across multiple clusters.
Only PostgreSQL is supported at this time. Refer to the component-specific documentation for information on which PostgreSQL versions (if any) are supported.
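Registering an external RDBMS requires a JDBC connection URL for the existing database. For PostgreSQL this follows the standard `jdbc:postgresql://host:port/database` form; a small helper to build one (the host and database names used in the example comment are placeholders):

```python
def postgres_jdbc_url(host: str, database: str, port: int = 5432) -> str:
    """Build a standard PostgreSQL JDBC connection URL (default port 5432)."""
    return f"jdbc:postgresql://{host}:{port}/{database}"

# Example with placeholder values:
# postgres_jdbc_url("mydb.example.com", "ranger")
# -> "jdbc:postgresql://mydb.example.com:5432/ranger"
```

Have the URL, user name, and password for the existing database ready before starting the registration flow.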
> UI for selecting a previously registered DB to be attached to a specific cluster:
Using External Authentication Sources (LDAP/AD) for Clusters
You can configure an existing LDAP/AD authentication source in the Cloudbreak UI or CLI so that it can later be associated with one or more Cloudbreak-managed clusters. After the authentication source has been registered with Cloudbreak, it is available during cluster creation and can be reused across multiple clusters.
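Registering an authentication source amounts to supplying the usual LDAP/AD connection details. The sketch below lists the kind of information to have at hand; the field names and values are generic LDAP/AD terms and placeholders, not Cloudbreak's exact form labels or CLI flags:

```python
# Illustrative only: keys are generic LDAP/AD terms, values are placeholders.
ldap_source = {
    "server": "ldaps://ad.example.com:636",          # directory server URL
    "bind_dn": "CN=cloudbreak,OU=svc,DC=example,DC=com",
    "user_search_base": "OU=Users,DC=example,DC=com",
    "group_search_base": "OU=Groups,DC=example,DC=com",
    "directory_type": "ACTIVE_DIRECTORY",            # or plain LDAP
}
```

Using LDAPS (as in the placeholder URL above) rather than plain LDAP keeps bind credentials off the wire in clear text.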
> UI for registering an existing LDAP/AD with Cloudbreak:
> UI for selecting a previously registered authentication source to be attached to a specific cluster:
Modifying Existing Cloudbreak Credentials
Cloudbreak allows you to modify existing credentials by using the edit option in the Cloudbreak UI or the credential modify command in the CLI. For more information, refer to Modify an Existing Credential.
Automatic Image Import on OpenStack
When using Cloudbreak on OpenStack, you no longer need to import HDP and HDF images manually: during your first attempt to create a cluster, Cloudbreak automatically imports the HDP and HDF images into your OpenStack environment. Only the Cloudbreak image must be imported manually.