We are pleased to announce the release of Cloudera Enterprise 5.3.10 (CDH 5.3.10, Cloudera Manager 5.3.10, and Cloudera Navigator 2.2.10).
This release fixes key bugs and includes the following:
CDH fixes for the following issues:
HDFS-9406 - When deleting a snapshot that contains the last record of a given INode, the fsimage may become corrupt because the create list of the snapshot diff in the previous snapshot and the child list of the parent INodeDirectory are not cleaned.
HADOOP-12464 - Interrupted client may try to fail over and retry
HADOOP-12604 - Exception may be swallowed in KMSClientProvider
HDFS-8211 - DataNode UUID is always null in the JMX counter
HDFS-9092 - NFS silently drops overlapping write requests and causes data copying to fail
HDFS-9358 - TestNodeCount#testNodeCount timed out
MAPREDUCE-6302 - Incorrect headroom can lead to a deadlock between map and reduce allocations
YARN-3464 - Race condition in LocalizerRunner kills localizer before localizing all resources
YARN-4235 - FairScheduler PrimaryGroup does not handle empty groups returned for a user
HBASE-11394 - Replication can have data loss if peer ID contains hyphen
HBASE-12336 - RegionServer failed to shutdown for NodeFailoverWorker thread
HBASE-13437 - ThriftServer leaks ZooKeeper connections
HBASE-15019 - Replication stuck when HDFS is restarted
HIVE-8115 - Hive select query hangs when fields contain map
IMPALA-1702 - "invalidate metadata" can cause duplicate TableIds (issue not entirely fixed, but now fails gracefully)
IMPALA-3095 - Allow additional Kerberos users to be authorized to access internal APIs
SENTRY-835 - Drop table leaves a connection open when using metastorelistener
SENTRY-957 - Exceptions in MetastoreCacheInitializer should probably not prevent HMS from starting up
For a full list of upstream JIRAs fixed in CDH 5.3.10, see the issues fixed section of the Release Notes.
Cloudera Manager fixes for the following issues:
Scheme and location not filled in consistently during Hive replication import. In previous releases, the Hive replication import phase did not consistently fill in scheme and location information. This information is now filled in as expected.
YARN jobs fail after enabling Kerberos authentication or selecting Always Use Container Executor. After Kerberos security was enabled on a cluster or Always Use Container Executor was selected, YARN jobs failed. This occurred because the contents of any previously existing YARN User Cache directory could not be overwritten after security was enabled. YARN jobs now complete as expected after a change in Kerberos security or use of the Container Executor.
For a full list of issues fixed in Cloudera Manager 5.3.10, see the issues fixed section of the Release Notes.
We look forward to you trying it, using the information below.
As always, we are happy to hear your feedback. Please send your comments and suggestions to the user group or through our community forums. You can also file bugs through our external JIRA projects on issues.cloudera.org.