We are pleased to announce the general availability of Cloudera Enterprise 6.3.0, the world’s leading data platform for machine learning and analytics, optimized for the cloud.
This release delivers a number of new capabilities, improved usability, better performance, and support for more modern Java and identity management infrastructure software.
New capabilities include:
Support for the OpenJDK 11 runtime: all components and tools in Cloudera Enterprise 6 support both the JDK 8 and JDK 11 Java Virtual Machines (JVMs).
Updated versions of platform components, including packaging & upgrade support for the following new Apache project versions: Kafka 2.2.1, HBase 2.1.4, Impala 3.2.0 and Kudu 1.10.0.
Support for FreeIPA and Red Hat Identity Management: Cloudera Manager & CDH now support the use of FreeIPA/Red Hat IdM as the Kerberos KDC provider for CDH, and for LDAP authentication to Cloudera Enterprise.
Support for zstd compression with Parquet files. Zstandard (zstd) is a fast, real-time compression algorithm offering an improved balance between compression ratio and compression speed, as well as fast decompression. Both Impala and Spark have been certified with zstd-compressed Parquet.
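As an illustration, Impala lets you choose the Parquet compression codec per session with the COMPRESSION_CODEC query option; the table names below are placeholders, and the optional compression level syntax should be checked against the Impala documentation for your release:

```shell
# In impala-shell: write new Parquet data compressed with zstd.
# ZSTD optionally takes a compression level (higher = smaller, slower).
SET COMPRESSION_CODEC=ZSTD;
CREATE TABLE sales_zstd STORED AS PARQUET AS SELECT * FROM sales;
```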
TLS certificate expiry monitoring and alerting: Cloudera Manager now alerts you 60 days before the Cloudera Manager Server’s TLS certificate expires, prompting you to rotate (re-generate) the certificates used by Cloudera Manager’s push-button wire encryption system (‘AutoTLS’).
Network Performance Inspector now includes a bandwidth test for verifying sufficient network performance between independent compute and storage clusters.
Kafka support in Compute Clusters: independently managed Kafka ‘compute’ clusters can now share a single Sentry service in a base CDH cluster for common authorization across all services.
Auditing in Virtual Private Clusters. Cloudera Navigator now extracts audit events from all relevant activity within Compute Clusters in addition to collecting audit events upon creation of Compute Clusters and Data Contexts. No lineage or metadata is extracted from services running on Compute clusters. The new behavior is described in detail in Virtual Private Clusters and Cloudera SDX.
Search, query, and access highlights:
Data Cache for Remote Reads (preview feature, off by default): To improve performance in environments with separate storage & compute clusters, as well as in object store environments, Impala now caches data for non-local reads (e.g., S3, ABFS, ADLS) on local storage. See Impala Remote Data Cache for information and the steps to enable the remote data cache.
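For reference, the cache is configured through an Impala daemon startup flag; the directory and quota below are illustrative (set the flag via the Impala daemon command-line argument safety valve in Cloudera Manager):

```shell
# Illustrative impalad startup flag: cache remote reads on a local
# directory, capped at 500 GB. Format: --data_cache=<dir_list>:<quota>
--data_cache=/mnt/impala/datacache:500GB
```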
Automatic Invalidate/Refresh Metadata (preview feature, off by default): When other CDH services update the Hive Metastore, Impala users no longer have to issue INVALIDATE/REFRESH in a number of common scenarios. See Impala Metadata Management for information and the steps to enable the Zero Touch Metadata feature.
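The feature is driven by Hive Metastore change events; a sketch of the relevant startup flag, assuming the polling interval shown is appropriate for your workload:

```shell
# Illustrative catalogd/impalad startup flag: poll the Hive Metastore
# for change events every 2 seconds, so Impala picks up external
# metadata updates without a manual INVALIDATE/REFRESH.
--hms_event_polling_interval_s=2
```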
Support for Kudu integration with the Hive Metastore: metadata for Kudu tables can now be managed via HMS and shared between Impala and Spark. See Using the Hive Metastore with Kudu for upgrading existing tables.
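When upgrading existing tables, the `kudu` CLI provides subcommands to check for and repair HMS-compatibility issues; a sketch with placeholder master addresses:

```shell
# Illustrative: scan existing Kudu tables for names or metadata that
# are incompatible with the Hive Metastore, then repair them.
kudu hms check master-1:7051,master-2:7051,master-3:7051
kudu hms fix   master-1:7051,master-2:7051,master-3:7051
```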
Kudu now supports both full and incremental table backups via a job implemented using Apache Spark, as well as restoring tables from those backups via a corresponding Spark restore job. Backups can be stored on HDFS, S3, or any other Spark-compatible destination. See the backup documentation for more details.
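A minimal sketch of launching the backup job with spark-submit; the jar name, paths, master addresses, and table names are all placeholders:

```shell
# Illustrative: back up two Kudu tables to HDFS via the Kudu backup
# Spark job. Re-running the same command later produces incremental
# backups against the same root path.
spark-submit --class org.apache.kudu.backup.KuduBackup kudu-backup2_2.11.jar \
  --kuduMasterAddresses master-1:7051,master-2:7051 \
  --rootPath hdfs:///kudu-backups \
  my_table1 my_table2
```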
Kudu’s web UI now supports SPNEGO, a protocol for securing HTTP requests with Kerberos by passing negotiation through HTTP headers. To require SPNEGO authentication, set the --webserver_require_spnego command-line flag.
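For example, once the flag is set on the Kudu daemons, a Kerberos-authenticated client can reach the UI as sketched below (principal and hostname are placeholders):

```shell
# On the Kudu master/tablet server: require SPNEGO for the web UI.
--webserver_require_spnego=true

# Illustrative client access: obtain a Kerberos ticket, then let curl
# negotiate authentication via the HTTP headers.
kinit alice@EXAMPLE.COM
curl --negotiate -u : http://kudu-master.example.com:8051/
```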
Query profile output has been enhanced for better monitoring and troubleshooting of query performance. See Impala Query Profile for generating and reading query profiles.
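To see the enhanced output, the profile of the most recent query can be printed directly in impala-shell; the query itself is illustrative:

```shell
# In impala-shell: run a query, then print its full runtime profile,
# including per-node timing and counters.
SELECT COUNT(*) FROM web_logs;
PROFILE;
```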
Kudu now supports native, fine-grained authorization via integration with Apache Sentry: Kudu can now enforce role-based access control policies defined in Sentry. When this feature is enabled, access control is enforced for all clients accessing Kudu, including Impala, Spark, and native Kudu clients. See the authorization documentation for more details.
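As an illustration, Sentry policies of the kind Kudu now enforces are typically defined through Impala; the role, group, and table names below are placeholders:

```shell
-- In impala-shell: define a role-based policy in Sentry that Kudu
-- will enforce for all clients once the integration is enabled.
CREATE ROLE analysts;
GRANT ROLE analysts TO GROUP analyst_group;
GRANT SELECT ON TABLE db.kudu_table TO ROLE analysts;
```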