Cloudbreak 2.9.0 is now available! It is a general availability (GA) release, so, with the exception of features marked as technical preview (TP), it is suitable for production use.

Try it now

Upgrade to 2.9.0

Quickly deploy using the quickstart option on AWS, Azure, or Google Cloud

Install manually on AWS, Azure, Google Cloud, or OpenStack

New features

Cloudbreak 2.9.0 introduces the following new features. Some of these were first introduced in Cloudbreak 2.8.0 as technical preview, while others are brand new:

| Feature | Description | Documentation |
|---|---|---|
| Specifying resource group name on Azure | When creating a cluster on Azure, you can specify the name of the new resource group where the cluster will be deployed. | Resource group name |
| Multiple existing security groups on AWS | When creating a cluster on AWS, you can select multiple existing security groups. This option is available only when an existing VPC is selected. | Create a cluster on AWS |
| EBS volume encryption on AWS | You can optionally configure encryption for EBS volumes attached to cluster instances running on EC2. Default or customer-managed encryption keys can be used. | EBS encryption on AWS |
| Shared VPCs on GCP | When creating a cluster on Google Cloud, you can place it in an existing shared VPC. | Shared networks on GCP |
| GCP volume encryption | By default, Google Compute Engine encrypts data at rest stored on disks. You can optionally configure encryption for the encryption keys used for disk encryption. Customer-supplied (CSEK) or customer-managed (CMEK) encryption keys can be used. | Disk encryption on GCP |
| Workspaces | Cloudbreak introduces a new authorization model that allows resource sharing via workspaces. In addition to a default personal workspace, users can create additional shared workspaces. | Workspaces |
| Operations audit logging | Cloudbreak records an audit trail of the actions performed by Cloudbreak users, as well as those performed by the Cloudbreak application. | Operations audit logging |
| Updating long-running clusters | Cloudbreak supports updating the base image's operating system and any third-party packages that have been installed, as well as upgrading Ambari, HDP, and HDF. | Updating OS and tools on long-running clusters; Updating Ambari and HDP/HDF on long-running clusters |
| HDP 3.1 | Cloudbreak introduces two default HDP 3.1 blueprints and allows you to create your own custom HDP 3.1 blueprints. | Default cluster configurations |
| HDF 3.3 | Cloudbreak introduces two default HDF 3.3 blueprints and allows you to create your own custom HDF 3.3 blueprints. To get started, refer to the How to create a NiFi cluster HCC post. | Default cluster configurations |
| Recipe parameters | Supported parameters can be specified in recipes as variables by using mustache-style templating with the "{{{ }}}" syntax. | Writing recipes; Recipe parameters |
| Shebang in Python recipes | Cloudbreak supports using a shebang line in Python scripts run as recipes. | Writing recipes |
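The two recipe features above (mustache-style parameters and shebang support in Python recipes) can be sketched together in a minimal, hypothetical recipe. The `{{{ general.clusterName }}}` parameter name and the log path are illustrative assumptions, not confirmed Cloudbreak parameter names:

```python
#!/usr/bin/env python
# Minimal sketch of a Python recipe (hypothetical). Cloudbreak replaces
# mustache-style {{{ ... }}} placeholders with actual values before the
# script is executed on the cluster nodes; the parameter name and log
# path below are illustrative, not confirmed Cloudbreak identifiers.

LOG_PATH = "/tmp/recipe.log"  # hypothetical location for recipe output

def main():
    # Substituted by Cloudbreak before execution; outside Cloudbreak
    # the placeholder remains literal text.
    cluster_name = "{{{ general.clusterName }}}"
    with open(LOG_PATH, "a") as log:
        log.write("Recipe ran on cluster: %s\n" % cluster_name)

if __name__ == "__main__":
    main()
```

When the script is attached to a cluster as a recipe, the placeholder is resolved before the shebang-selected interpreter runs it; executed locally, the placeholder is simply written out as literal text.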

Technical preview features

The following features are technical preview (TP) and are not suitable for production use:

| Feature | Description | Documentation |
|---|---|---|
| AWS GovCloud (TP) | You can install Cloudbreak and create Cloudbreak-managed clusters on AWS GovCloud. | Deploying on AWS GovCloud |
| Azure ADLS Gen2 (TP) | When creating a cluster on Azure, you can optionally configure access to ADLS Gen2. | Configuring access to ADLS Gen2 |
| New and changed data lake blueprints (TP) | Cloudbreak includes three data lake blueprints: two for HDP 2.6 (HA and Atlas) and one for HDP 3.1. Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore. To get started, refer to the How to create a data lake with Cloudbreak 2.9 HCC post. | Working with data lakes |

Default blueprints

Cloudbreak 2.9.0 includes default HDP 2.6, HDP 3.1, and HDF 3.3 workload cluster blueprints. In addition, HDP 3.1 and HDP 2.6 data lake blueprints are available as technical preview. Note that Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore.

Documentation links

How to create a data lake with Cloudbreak 2.9 (HCC post)

How to create a NiFi cluster (HCC post)

Cloudbreak 2.9.0 documentation (Official docs)

Release notes (Official docs)
