New Contributor
Posts: 1
Registered: ‎04-28-2017

Upgrade to Hadoop 2.8

When will CDH incorporate Apache Hadoop 2.8?

Cloudera Employee
Posts: 39
Registered: ‎10-07-2016

Re: Upgrade to Hadoop 2.8

Howdy,

 

Thanks for reaching out on this. Currently, we're on Hadoop 2.6, and 2.8 is slated as "TBD". We don't rebase on minor versions very often because of the churn it introduces; instead, we opt to backport features into CDH, which leads me to my next question: is there a particular feature in Hadoop 2.8 you're looking to use?

 

Let me know when you can.

 

Cheers,

Josh

New Contributor
Posts: 1
Registered: ‎04-28-2017

Re: Upgrade to Hadoop 2.8

Yes, I'm curious when the settings described in this JIRA will be available. Based on the latest release notes, HADOOP-12437 is not in CDH yet. This would be ideal for those of us using multihomed appliances.

https://issues.apache.org/jira/browse/HADOOP-12437
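For anyone tracking this: HADOOP-12437 lets daemons resolve their own hostname through a specific network interface or nameserver. A minimal core-site.xml sketch, assuming the property names introduced by that upstream JIRA (verify they exist in your Hadoop build before deploying; the interface and nameserver values are illustrative):

```
<!-- core-site.xml: resolve this node's hostname via a chosen
     interface/nameserver instead of the default system resolver.
     Property names come from HADOOP-12437 (not yet in CDH per the
     release notes above); values below are placeholders. -->
<property>
  <name>hadoop.security.dns.interface</name>
  <value>eth1</value> <!-- interface whose address should be used -->
</property>
<property>
  <name>hadoop.security.dns.nameserver</name>
  <value>192.168.1.53</value> <!-- nameserver to use for lookups -->
</property>
```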

New Contributor
Posts: 1
Registered: ‎11-17-2017

Re: Upgrade to Hadoop 2.8

I would like the ability to use Java 9.

Java 7/8 have a 64 KB method-size limit: https://dzone.com/articles/method-size-limit-java

This limit restricts the number of variables I can use, say, in a linear model in MLlib, to roughly 500-2,000, depending on how long my column names are.
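For context on that limit: the JVM class-file format caps each method's bytecode at 64 KB, so javac (and code generators like the ones Spark uses) fail with "code too large" when a generated method exceeds it. A self-contained sketch that reproduces the failure by compiling a generated method in memory; it assumes only a JDK, and the class/method names are illustrative:

```java
import javax.tools.*;
import java.net.URI;
import java.util.Arrays;

public class MethodLimitDemo {

    /** An in-memory .java source, so the demo needs no files on disk. */
    static class StringSource extends SimpleJavaFileObject {
        final String code;
        StringSource(String className, String code) {
            super(URI.create("string:///" + className + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) { return code; }
    }

    /**
     * Generates a class whose single method contains `statements` assignments
     * and asks javac to compile it. Enough statements push the method's
     * bytecode past the 64 KB Code-attribute limit, and javac reports
     * "code too large" (on stderr) and returns false.
     */
    static boolean compiles(int statements) {
        StringBuilder src = new StringBuilder("class Big { static long f() { long x = 0;\n");
        for (int i = 0; i < statements; i++) src.append("x += ").append(i).append(";\n");
        src.append("return x; } }\n");
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        return javac.getTask(null, null, null,
                Arrays.asList("-d", System.getProperty("java.io.tmpdir")), // keep .class out of cwd
                null, Arrays.asList(new StringSource("Big", src.toString()))).call();
    }

    public static void main(String[] args) {
        System.out.println("100 statements compile:    " + compiles(100));   // true
        System.out.println("20,000 statements compile: " + compiles(20000)); // false
    }
}
```

Each `x += i;` costs several bytes of bytecode, so 20,000 statements comfortably exceed the 65,535-byte cap while 100 do not; this is the same ceiling generated MLlib model code runs into.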

Explorer
Posts: 16
Registered: ‎11-13-2014

Re: Upgrade to Hadoop 2.8

Hi,

 

Is there any timeline for HDFS-11047? (https://issues.apache.org/jira/browse/HDFS-11047)

 

Based on heap usage observations, we suspect this issue currently affects all CDH versions of Hadoop. It's affecting us significantly, so we need to know whether staying on CDH is feasible for us.

 

Thanks!

Posts: 1,673
Kudos: 329
Solutions: 263
Registered: ‎07-31-2013

Re: Upgrade to Hadoop 2.8


@jabberwockkie wrote:

Yes, I'm curious when the settings described in this JIRA will be available. Based on the latest release notes, HADOOP-12437 is not in CDH yet. This would be ideal for those of us using multihomed appliances.

https://issues.apache.org/jira/browse/HADOOP-12437

While this may arrive in C6 or later, I wanted to point out that Cloudera currently (as of C5) does not support multi-homed networks, apart from a few specific exclusions that have been intensively tested: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#cdh_...

Posts: 1,673
Kudos: 329
Solutions: 263
Registered: ‎07-31-2013

Re: Upgrade to Hadoop 2.8


@axiom wrote:

I would like the ability to use Java 9.

Java 7/8 have a 64 KB method-size limit: https://dzone.com/articles/method-size-limit-java

This limit restricts the number of variables I can use, say, in a linear model in MLlib, to roughly 500-2,000, depending on how long my column names are.


JDK 9 support is not currently planned for C5. It's worth remembering that JDK 9 follows Oracle's new release policy and will reach end of life for updates in March 2018: http://www.oracle.com/technetwork/java/eol-135779.html#Interfaces. That makes it infeasible to support as a server runtime, though you may try it on clients to leverage the new language features. The same limited lifetime applies to JDK 10.

Posts: 1,673
Kudos: 329
Solutions: 263
Registered: ‎07-31-2013

Re: Upgrade to Hadoop 2.8


@Chewlocka wrote:

Hi,

 

Is there any timeline for HDFS-11047? (https://issues.apache.org/jira/browse/HDFS-11047)

 

Based on heap usage observations, we suspect this issue currently affects all CDH versions of Hadoop. It's affecting us significantly, so we need to know whether staying on CDH is feasible for us.

 

Thanks!


Could you post your DataNode heap investigation as a separate topic on the Storage board, to help Engineering investigate this report? We do have a number of customers running with a lot of blocks on their DataNodes, yet their DNs do not appear to crash with OOM (which I think is what your post implies). Alternatively, if you have access to Support, please log a case.
