Cloudbreak 2.9.0 is now available! It is a general availability (GA) release, so, with the exception of features marked as TP (technical preview), it is suitable for production use.
- Quickly deploy by using quickstart on AWS, Azure, or Google Cloud
- Install manually on AWS, Azure, Google Cloud, or OpenStack
Cloudbreak 2.9.0 introduces the following new features. While some of these features were introduced in Cloudbreak 2.8.0 TP, others are brand new:
| Feature | Description | Documentation |
|---|---|---|
| Specifying resource group name on Azure | When creating a cluster on Azure, you can specify the name of the new resource group where the cluster will be deployed. | Resource group name |
| Multiple existing security groups on AWS | When creating a cluster on AWS, you can select multiple existing security groups. This option is available only when an existing VPC is selected. | Create a cluster on AWS |
| EBS volume encryption on AWS | You can optionally configure encryption for EBS volumes attached to cluster instances running on EC2. Default or customer-managed encryption keys can be used. | EBS encryption on AWS |
| Shared VPCs on GCP | When creating a cluster on Google Cloud, you can place it in an existing shared VPC. | Shared networks on GCP |
| GCP volume encryption | By default, Google Compute Engine encrypts data at rest stored on disks. You can optionally provide your own keys for disk encryption. Customer-supplied (CSEK) or customer-managed (CMEK) encryption keys can be used. | Disk encryption on GCP |
| Workspaces | Cloudbreak introduces a new authorization model that allows resource sharing via workspaces. In addition to a default personal workspace, users can create additional shared workspaces. | Workspaces |
| Operations audit logging | Cloudbreak records an audit trail of the actions performed by Cloudbreak users as well as those performed by the Cloudbreak application. | Operations audit logging |
| Updating long-running clusters | Cloudbreak supports updating the base image's operating system and any third-party packages that have been installed, as well as upgrading Ambari, HDP, and HDF. | Updating OS and tools on long-running clusters and Updating Ambari and HDP/HDF on long-running clusters |
| HDP 3.1 | Cloudbreak introduces two default HDP 3.1 blueprints and allows you to create your own custom HDP 3.1 blueprints. | Default cluster configurations |
| HDF 3.3 | Cloudbreak introduces two default HDF 3.3 blueprints and allows you to create your own custom HDF 3.3 blueprints. To get started, refer to the How to create a NiFi cluster HCC post. | Default cluster configurations |
| Recipe parameters | Supported parameters can be specified in recipes as variables by using Mustache-style templating with the "{{{ }}}" syntax (see the recipe sketch after this table). | Writing recipes and Recipe parameters |
| Shebang in Python recipes | Cloudbreak supports using a shebang line in Python scripts run as recipes (also shown in the sketch after this table). | Writing recipes |
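The two recipe-related features can be combined in a single script. Below is a minimal sketch of a Python recipe that uses a shebang line together with a Mustache-style "{{{ }}}" placeholder. The parameter name general.clusterName and the output path are illustrative assumptions only; refer to the Recipe parameters documentation for the authoritative list of supported variables.

```python
#!/usr/bin/env python
# Minimal sketch of a Cloudbreak recipe written in Python.
# The shebang above tells the recipe runner to execute this script with the
# Python interpreter rather than the default shell.
#
# The {{{ }}} placeholder below uses Mustache-style templating; Cloudbreak
# substitutes supported recipe parameters before the script is executed.
# "general.clusterName" is an illustrative parameter name - check the
# Recipe parameters documentation for the exact list of supported variables.

marker_file = "/tmp/recipe-ran.txt"  # arbitrary example path

with open(marker_file, "w") as out:
    out.write("Recipe executed on cluster: {{{general.clusterName}}}\n")
```

Recipes are registered in Cloudbreak and attached to host groups during cluster creation; see the Writing recipes documentation for the supported execution phases (for example, pre- or post-cluster-install) and for how to upload the script.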
The following features are technical preview (not suitable for production):
| Feature | Description | Documentation |
|---|---|---|
| AWS GovCloud (TP) | You can install Cloudbreak and create Cloudbreak-managed clusters on AWS GovCloud. | Deploying on AWS GovCloud |
| Azure ADLS Gen2 (TP) | When creating a cluster on Azure, you can optionally configure access to ADLS Gen2. This feature is technical preview. | Configuring access to ADLS Gen2 |
| New and changed data lake blueprints (TP) | Cloudbreak includes three data lake blueprints: two for HDP 2.6 (HA and Atlas) and one for HDP 3.1. Note that Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore. To get started with data lakes, refer to the How to create a data lake with Cloudbreak 2.9 HCC post. | Working with data lakes |
Cloudbreak 2.9.0 includes the following HDP 2.6, HDP 3.1, and HDF 3.3 workload cluster blueprints. In addition, HDP 3.1 and HDP 2.6 data lake blueprints are available as technical preview. Note that Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore.
- How to create a data lake with Cloudbreak 2.9 (HCC post)
- How to create a NiFi cluster (HCC post)
- Cloudbreak 2.9.0 documentation (Official docs)
- Release notes (Official docs)