Cloudera Labs

[ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH5.7

Expert Contributor

Cloudera is pleased to announce the third Cloudera Labs packaging of the Apache Phoenix project. This packaging is based on Phoenix's upstream 4.7.0 release and is expected to work with CDH5.7.


Interested users should follow the installation directions from the original Cloudera Labs packaging announcement. For step 1c, when adding a Remote Parcel Repository URL, use the one for Cloudera Labs packaging 1.3:


http://archive.cloudera.com/cloudera-labs/phoenix/parcels/1.3/


The remainder of the steps should be the same.


Please note that Cloudera Labs is used to measure interest in projects. While Labs projects are packaged and tested to work with CDH, they are not officially supported. Additionally, as with prior Cloudera Labs packagings, there is no guarantee of upgrade compatibility from previously packaged Phoenix versions.


Special thanks to Andrew Purtell's "Phoenix for Cloudera" project for compatibility work leveraged in this packaging.

17 Replies

Explorer

Resolved.

OK, so I did the pure parcel-based (recommended) installation on CDH 5.8.1 (the latest as of now), and Phoenix worked properly with HBase 1.2.x.


Thanks!!!!

New Contributor

In a Java program, I'm using the JDBC driver from the phoenix-4.7.0-clabs-phoenix1.3.0-client.jar in this latest Phoenix parcel, and I'm seeing a memory issue as I read an increasing number of records. My program looks up single rows by primary key from a Phoenix table with about 90 columns, and I'm testing the performance of those selects. Memory continued to grow over time until I had read a total of 50,000 records (a select is executed for each record), at which point I get:

Exception in thread "Thread-12" java.lang.OutOfMemoryError: GC overhead limit exceeded

I shouldn't have a leak in my program, as I use a single prepared statement and only change the primary-key value each time I execute it. Has this pattern of growing memory use been observed, and if so, will it be addressed?
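For reference, the access pattern is essentially the following (a minimal sketch; the table name, key column, and ZooKeeper quorum are placeholders):

// Minimal sketch of the lookup loop described above. MY_TABLE, PK, and the
// connection URL are placeholders. Each ResultSet is closed per iteration via
// try-with-resources, so memory growth beyond this pattern is unexpected.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SingleRowLookupTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT * FROM MY_TABLE WHERE PK = ?")) {
            for (long key = 0; key < 50_000; key++) {
                ps.setLong(1, key); // re-bind the key; the statement itself is reused
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // read the ~90 columns; nothing is retained across iterations
                    }
                }
            }
        }
    }
}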

New Contributor

Hi mkanchwala,

I'm trying to install this on a CDH 5.7.5 cluster using the Apache tar files, but I'm getting the error below:


Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;


Can you point me to any documentation for installing this via parcels on CDH 5.7?


Appreciate your help on this.


Thanks


New Contributor

Does this new Apache Phoenix 4.7.0 packaging also work on CDH 5.7.1 or CDH 5.7.2, or does it work only on CDH 5.7.0?

Expert Contributor

It is expected to work with CDH 5.7.0+, barring internal changes in CDH that break the assumptions Phoenix makes about HBase. At the moment, that should mean it works with CDH 5.7 and CDH 5.8. The testing done for the release was against CDH 5.7.0 specifically.

Contributor

Hi - We find that CLAB Phoenix 4.7.0+phoenix1.3.0+0 doesn't seem to fully support the CDH 5.7.1 components: it appears to have been built against Hadoop 2.5.1 and Spark 1.5, whereas CDH 5.7.1 ships Hadoop 2.6.0 and Spark 1.6.0.


One of the errors encountered by my colleague:


Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.Module$SetupContext.setClassIntrospector(Lcom/fasterxml/jackson/databind/introspect/ClassIntrospector;)V
    at com.fasterxml.jackson.module.scala.introspect.ScalaClassIntrospectorModule$$anonfun$1.apply(ScalaClassIntrospector.scala:32)
    at com.fasterxml.jackson.module.scala.introspect.ScalaClassIntrospectorModule$$anonfun$1.apply(ScalaClassIntrospector.scala:32)
    at com.fasterxml.jackson.module.scala.JacksonModule$$anonfun$setupModule$1.apply(JacksonModule.scala:47)
    at com.fasterxml.jackson.module.scala.JacksonModule$$anonfun$setupModule$1.apply(JacksonModule.scala:47)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:47)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:18)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:525)
    at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:81)
    at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:725)
    at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1140)
    at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD(JavaSparkContext.scala:507)
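
One way to confirm this kind of Jackson clash is to check which jar actually supplied jackson-databind at runtime. A minimal diagnostic sketch (purely illustrative):

// Diagnostic sketch: print which jar jackson-databind was loaded from. If this
// resolves to the Phoenix client jar rather than the jackson-databind that
// Spark ships, the two copies are shadowing each other on the classpath.
import com.fasterxml.jackson.databind.ObjectMapper;

public class WhichJackson {
    public static void main(String[] args) {
        System.out.println(ObjectMapper.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}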


Documentation on the main Phoenix site seems partially out of date and inconsistent (the release notes are still at 4.5.0). On the other hand, the CLAB Phoenix GitHub has not been updated in over a year. Can you provide more technical detail and a roadmap for Phoenix at Cloudera, especially its relationship with current and future HBase and Spark releases?


Thanks,

Miles

Contributor

Updates on the CDH-compatibility effort, both in Apache and in Cloudera Labs, are tracked in this thread.


New Contributor

I installed Phoenix 4.7.0 on our CDH 5.7.5 cluster using the parcel. Everything works fine except that I can't populate a secondary index using the HBase IndexTool. This looks like a known issue here, and it is reported as fixed in version 4.8.0. I am wondering when Cloudera will release a Phoenix parcel for version 4.8.0?
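
For reference, this is the kind of invocation that fails for me (a minimal sketch launched programmatically via ToolRunner; the table name, index name, and output path are placeholders, and the index is assumed to have been created with ASYNC):

// Minimal sketch of launching the Phoenix IndexTool MapReduce job via
// ToolRunner. DATA_TABLE, DATA_TABLE_IDX, and the output path are placeholders;
// the index is assumed to have been created with CREATE INDEX ... ASYNC.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.index.IndexTool;

public class PopulateIndex {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        int exitCode = ToolRunner.run(conf, new IndexTool(), new String[] {
                "--data-table", "DATA_TABLE",         // placeholder data table
                "--index-table", "DATA_TABLE_IDX",    // placeholder ASYNC index
                "--output-path", "/tmp/index_hfiles"  // HDFS dir for generated HFiles
        });
        System.exit(exitCode);
    }
}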


Thanks so much!

Shumin