Support Questions
Find answers, ask questions, and share your expertise

Install Apache Phoenix on CDH 5.7.0

Solved

New Contributor

Hello experts!

 

I'm trying to use Apache Phoenix on my Docker-based Cloudera Quickstart for training purposes.

 

I found a great blog post about this: http://crazyadmins.com/install-and-configure-apache-phoenix-on-cloudera-hadoop-cdh5/

 

However, it doesn't work for me...

 

My current HBase version is 1.2.0, so according to the Apache Phoenix website, 4.14 is the most recent version I can use.

 

Here's what I did:

1. wget http://archive.apache.org/dist/phoenix/apache-phoenix-4.14.0-cdh5.13.2/bin/apache-phoenix-4.14.0-cdh...
2. tar -xvf apache-phoenix-4.14.0-cdh5.13.2-bin.tar.gz
3. Copied all the contents of the extracted archive into my /usr/lib/hbase/lib/ directory (using: cp -a /phoenix/apache-phoenix-4.14.0-cdh5.13.2-bin/. /usr/lib/hbase/lib/apache-phoenix)
4. Restarted HBase (to be sure, I stopped my Docker container and started it again)
5. I should then be able to run a command like: ./sqlline.py localhost
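For reference, the steps above can be sketched as a small script. This is only a sketch: the artifact name is taken from the post, the download/extract/copy commands are shown as comments rather than run, and the exact server-jar file name inside the tarball is an assumption.

```shell
# Parameterized sketch of the install steps above.
PHOENIX_DIST="apache-phoenix-4.14.0-cdh5.13.2"
TARBALL="${PHOENIX_DIST}-bin.tar.gz"
URL="http://archive.apache.org/dist/phoenix/${PHOENIX_DIST}/bin/${TARBALL}"
echo "$URL"

# The actual steps (not run here):
#   wget "$URL"
#   tar -xvf "$TARBALL"
#   # Note: HBase only puts lib/*.jar on its classpath, so the server jar
#   # should go directly into /usr/lib/hbase/lib/, not a subdirectory.
#   cp "${PHOENIX_DIST}-bin/phoenix-4.14.0-cdh5.13.2-server.jar" /usr/lib/hbase/lib/
```

One thing worth noting: copying into a subdirectory such as /usr/lib/hbase/lib/apache-phoenix (as in step 3) would leave the jars off HBase's classpath.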
But it doesn't work. I received the following error message:

Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1707)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1568)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1497)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:468)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1707)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1568)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1497)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:468)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)
        at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
        at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2528)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2491)
        at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2491)
        at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
        at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
        at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
        at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
        at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
        at sqlline.Commands.connect(Commands.java:1064)
        at sqlline.Commands.connect(Commands.java:996)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
        at sqlline.SqlLine.dispatch(SqlLine.java:809)
        at sqlline.SqlLine.initArgs(SqlLine.java:588)
        at sqlline.SqlLine.begin(SqlLine.java:661)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks


Could you please help?

Best regards,

1 ACCEPTED SOLUTION


Re: Install Apache Phoenix on CDH 5.7.0

Guru
@vincent2,

Apache Phoenix is only available for CDH from 5.16.x onwards, as mentioned here:
https://blog.cloudera.com/apache-phoenix-for-cdh/

It will not work on CDH 5.7.0. Please upgrade your CDH first if you want to use Phoenix.
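The version requirement can be expressed as a quick shell gate before attempting an install. This is a sketch: it assumes a plain major.minor.patch version string, and the 5.16 threshold comes from the reply above.

```shell
# Gate on the CDH version before attempting a Phoenix install:
# Cloudera's Phoenix parcel requires CDH 5.16 or later.
cdh_version="5.7.0"   # the quickstart image from the question
major=${cdh_version%%.*}
rest=${cdh_version#*.}
minor=${rest%%.*}
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 16 ]; }; then
  echo "CDH $cdh_version: Phoenix parcel supported"
else
  echo "CDH $cdh_version: upgrade CDH to 5.16+ first"
fi
```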

Cheers
Eric


2 REPLIES 2


Re: Install Apache Phoenix on CDH 5.7.0

New Contributor

Great, @EricL! Thanks a lot for your advice.

 

I'm going to upgrade my CDH and try again.

 

 
