08-04-2016 01:41 PM
Does this new Apache Phoenix 4.7.0 packaging also work on CDH 5.7.1 or CDH 5.7.2, or only on CDH 5.7.0?
08-04-2016 01:48 PM
It is expected to work with CDH 5.7.0+, barring internal changes in CDH that break the assumptions Phoenix makes about HBase. At the moment, that should mean working with CDH 5.7 and CDH 5.8. The testing done for the release was specifically against CDH 5.7.0.
08-08-2016 03:11 PM
Hi - We find that CLAB Phoenix 4.7.0+phoenix1.3.0+0 doesn't seem to fully support the CDH 5.7.1 components: it appears to have been built against Hadoop 2.5.1 and Spark 1.5, whereas CDH 5.7.1 ships Hadoop 2.6.0 and Spark 1.6.0.
One of the errors encountered by my colleague:
Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.Module$SetupContext.setClassIntrospector(Lcom/fasterxml/jackson/databind/introspect/ClassIntrospector;)V
Documentation on the main Phoenix site seems partially out of date and inconsistent (the release notes are still at 4.5.0). On the other hand, the CLAB Phoenix GitHub has not been updated in over a year. Can you provide more technical details and a roadmap for Phoenix at Cloudera, especially its relationship with current and future HBase and Spark releases?
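For anyone else hitting that NoSuchMethodError, a minimal diagnostic sketch (not Cloudera-provided; the class names are taken from the stack trace above) that prints which jar each Jackson class is loaded from, which usually exposes the version conflict:

public class JacksonWhence {
    public static void main(String[] args) throws Exception {
        String[] names = {
            "com.fasterxml.jackson.databind.Module$SetupContext",
            "com.fasterxml.jackson.databind.introspect.ClassIntrospector"
        };
        for (String name : names) {
            Class<?> c = Class.forName(name);
            // CodeSource is null only for bootstrap classes;
            // Jackson classes should resolve to a concrete jar path.
            System.out.println(name + " -> "
                + c.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}

If the two locations point at different jackson-databind jars (for example, one bundled inside the Phoenix client jar and one shipped with CDH), that mismatch is the likely cause of the error.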
08-10-2016 12:49 AM
OK, so I did the pure parcel-based (recommended) installation on CDH 5.8.1 (the latest as of now), and Phoenix worked properly with HBase 1.2.x.
09-16-2016 09:28 AM
In a Java program, I'm using the JDBC driver from phoenix-4.7.0-clabs-phoenix1.3.0-client.jar in this latest Phoenix parcel, and I'm seeing a memory issue as I read an increasing number of records. My program looks up single rows by primary key from a Phoenix table with about 90 columns, and I'm testing the performance of those selects. I noticed that memory continued to grow over time until I had read a total of 50,000 records (a select is executed for each record), at which point I get:
Exception in thread "Thread-12" java.lang.OutOfMemoryError: GC overhead limit exceeded
I shouldn't have a leak in my program, as I am using a single prepared statement and only change the primary key value each time I execute it. Has this behavior of increasing memory been observed, and if so, will it be addressed?
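For reference, the access pattern described above looks roughly like this (a sketch; the JDBC URL, table name MY_TABLE, and primary-key column ID are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PointLookupLoop {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and schema: one reused PreparedStatement,
        // with only the primary-key value rebound per execution.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT * FROM MY_TABLE WHERE ID = ?")) {
            for (long id = 0; id < 50_000; id++) {
                ps.setLong(1, id);
                // Close each ResultSet; leaving them open is a common
                // source of steadily growing heap in loops like this.
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rs.getString(1); // consume columns as needed
                    }
                }
            }
        }
    }
}

One thing worth ruling out in this pattern is unclosed ResultSet objects: each executeQuery() returns a new ResultSet, and not closing them lets per-query state accumulate even though the statement itself is reused.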
01-08-2017 08:02 PM
I installed Phoenix 4.7.0 on our CDH 5.7.5 cluster using the parcel. Everything works fine except that I can't populate a secondary index using the HBase IndexTool. This looks like a known issue, which is reported as fixed in version 4.8.0. I am wondering when Cloudera will release a Phoenix 4.8.0 parcel.
Thanks so much!
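For context, the failing step corresponds roughly to the following IndexTool invocation (a sketch; the table and index names and the output path are placeholders, and it assumes the index was created with ASYNC, as the tool expects):

import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.index.IndexTool;

public class BuildIndex {
    public static void main(String[] args) throws Exception {
        // Placeholder table/index names and output path; equivalent to
        // running the tool via `hbase org.apache.phoenix.mapreduce.index.IndexTool`.
        int rc = ToolRunner.run(new IndexTool(), new String[] {
            "--data-table",  "MY_TABLE",
            "--index-table", "MY_INDEX",
            "--output-path", "/tmp/MY_INDEX_HFILES"
        });
        System.exit(rc);
    }
}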
04-18-2017 09:49 AM
I'm trying to install this on a CDH 5.7.5 cluster using the Apache tar files, but I am getting the error below.
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
Can you point me to any documentation for installing via parcels on CDH 5.7?
Appreciate your help on this.