New Contributor
Posts: 1
Registered: ‎08-04-2016

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

Does this new Apache Phoenix 4.7.0 packaging also work on CDH 5.7.1 or CDH 5.7.2, or does it work only on CDH 5.7.0?

Cloudera Employee
Posts: 88
Registered: ‎01-08-2014

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

It is expected to work with CDH 5.7.0+, barring internal changes in CDH that break the assumptions Phoenix makes about HBase. At the moment, that means CDH 5.7 and CDH 5.8. The testing for this release was done against CDH 5.7.0 specifically.

Expert Contributor
Posts: 63
Registered: ‎03-04-2015

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

Hi - We find that CLAB Phoenix 4.7.0+phoenix1.3.0+0 doesn't seem to fully support the CDH 5.7.1 components: it appears to have been built against Hadoop 2.5.1 and Spark 1.5, whereas CDH 5.7.1 ships Hadoop 2.6.0 and Spark 1.6.0.

 

One of the errors encountered by my colleague:

 

Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.Module$SetupContext.setClassIntrospector(Lcom/fasterxml/jackson/databind/introspect/ClassIntrospector;)V
    at com.fasterxml.jackson.module.scala.introspect.ScalaClassIntrospectorModule$$anonfun$1.apply(ScalaClassIntrospector.scala:32)
    at com.fasterxml.jackson.module.scala.introspect.ScalaClassIntrospectorModule$$anonfun$1.apply(ScalaClassIntrospector.scala:32)
    at com.fasterxml.jackson.module.scala.JacksonModule$$anonfun$setupModule$1.apply(JacksonModule.scala:47)
    at com.fasterxml.jackson.module.scala.JacksonModule$$anonfun$setupModule$1.apply(JacksonModule.scala:47)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:47)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:18)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:525)
    at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:81)
    at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:725)
    at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1140)
    at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopRDD(JavaSparkContext.scala:507)
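A NoSuchMethodError like this usually means two incompatible versions of jackson-databind are on the classpath and the wrong one wins. A generic way to check which jar a class was actually loaded from is the sketch below; the probe class here is a placeholder, and in a real diagnosis you would substitute the conflicting class (e.g. com.fasterxml.jackson.databind.Module):

```java
import java.net.URL;

public class WhichJar {
    public static void main(String[] args) {
        // Substitute the conflicting class for WhichJar.class, e.g.
        // com.fasterxml.jackson.databind.Module.class, to see which jar
        // (or directory) on the classpath actually supplied it.
        Class<?> probe = WhichJar.class; // placeholder probe class
        URL loc = probe.getProtectionDomain().getCodeSource().getLocation();
        System.out.println(probe.getName() + " loaded from " + loc);
    }
}
```

Comparing that location against the jackson version Spark expects narrows down whether the parcel's bundled Jackson or CDH's is shadowing the other.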

 

Documentation on the main Phoenix site seems partially out of date and inconsistent (the release note is still at 4.5.0). On the other hand, the CLAB Phoenix GitHub has not been updated in over a year. Can you provide more technical details and a roadmap for Phoenix at Cloudera, especially its relationship with current and future HBase and Spark releases?

 

Thanks,

Miles

Explorer
Posts: 12
Registered: ‎05-25-2016

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

Resolved.

OK, so I did the pure parcel-based (recommended) installation on CDH 5.8.1 (the latest as of now), and Phoenix worked properly with HBase 1.2.x.

 

Thanks!!!!

New Contributor
Posts: 5
Registered: ‎08-31-2016

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

In a Java program, I'm using the JDBC driver from the phoenix-4.7.0-clabs-phoenix1.3.0-client.jar in this latest Phoenix parcel, and I'm seeing a memory issue as I read an increasing number of records. My program looks up single rows by primary key from a Phoenix table with about 90 columns, and I'm testing the performance of those selects. Memory grows over time until, after reading a total of 50,000 records (one select executed per record), I get:

Exception in thread "Thread-12" java.lang.OutOfMemoryError: GC overhead limit exceeded

I shouldn't have a leak in my own program, since I reuse a single prepared statement and only change the primary-key value each time I execute it. Has this increasing-memory behavior been observed before, and if so, will it be addressed?
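One common cause of this pattern, worth ruling out before blaming the driver: reusing a PreparedStatement is fine, but each executeQuery() returns a new ResultSet, and if those are not closed per iteration the driver's buffers accumulate until the heap fills. A self-contained sketch of the close-per-iteration shape (using a stub counter in place of a real Phoenix connection, since this is illustrative, not the actual driver API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stub standing in for java.sql.ResultSet; counts instances left unclosed.
// In the real program, rs would come from preparedStatement.executeQuery().
class FakeResultSet implements AutoCloseable {
    static final AtomicInteger open = new AtomicInteger();
    FakeResultSet() { open.incrementAndGet(); }
    @Override public void close() { open.decrementAndGet(); }
}

public class CloseEachResultSet {
    public static void main(String[] args) {
        for (int i = 0; i < 50_000; i++) {
            // try-with-resources guarantees each ResultSet is closed before
            // the next select, so nothing accumulates across iterations.
            try (FakeResultSet rs = new FakeResultSet()) {
                // read the single row for key i here
            }
        }
        System.out.println("unclosed result sets: " + FakeResultSet.open.get());
    }
}
```

If every ResultSet really is being closed and memory still grows, a heap dump showing which driver-side objects accumulate would make the report much easier to act on.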

New Contributor
Posts: 1
Registered: ‎04-03-2015

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

I installed Phoenix 4.7.0 on our CDH 5.7.5 cluster using the parcel. Everything works fine except that I can't populate a secondary index using the HBase IndexTool. This looks like a known issue, reported as fixed in version 4.8.0. When will Cloudera release a Phoenix parcel for version 4.8.0?

 

Thanks so much!

Shumin

New Contributor
Posts: 1
Registered: ‎02-02-2017

Re: [ANNOUNCE] Third installment of Cloudera Labs packaging of Apache Phoenix - Phoenix 4.7.0 on CDH

Hi mkanchwala,

 

I am trying to install this on a CDH 5.7.5 cluster from the Apache tar files, but I am getting the error below.

 

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;

 

Can you point me to any documentation for installing via parcels on CDH 5.7?

 

Appreciate your help on this.

 

Thanks

 
