Support Questions

Exception connecting to an HBase server with Phoenix 4.7 installed: isNamespaceMappingEnabled problem

Hi:

While trying to connect to an HBase server with Phoenix 4.7 installed, using a client of the same version,

I got the following exception:

java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties

Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled

I checked the server and client; both sides have the following options set to true in the hbase-site config:

phoenix.schema.isNamespaceMappingEnabled,

phoenix.schema.mapSystemTablesToNamespace
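
In hbase-site.xml, those entries look like this on both sides:

```xml
<!-- present in hbase-site.xml on both client and server -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```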

Attached are the traces and the config screenshots for the HBase client and server. The platform is HDP 2.5 on both sides.

Trace1 is from using the Phoenix JDBC driver directly; trace2 is from using it through Spark.

Any idea what I should do?

Thanks

Xindian

1 ACCEPTED SOLUTION

For trace 1 (if you are using sqlline.py):

Check your <PHOENIX_HOME>/bin directory, remove any hbase-site.xml found there, and try again.

If you are using a Java program, you need to ensure hbase-site.xml is on the classpath, or add these properties while creating the connection.
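
For a plain Java program, a minimal sketch of passing these properties while creating the connection might look like this (the ZooKeeper host and znode in the URL are placeholders; use your cluster's values):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixConnect {

    // Client-side properties that must match the server's hbase-site.xml.
    static Properties namespaceProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        props.setProperty("phoenix.schema.mapSystemTablesToNamespace", "true");
        return props;
    }

    // "zk-host" and "/hbase-unsecure" are placeholders for your quorum and znode.
    static Connection connect() throws Exception {
        return DriverManager.getConnection(
                "jdbc:phoenix:zk-host:2181:/hbase-unsecure", namespaceProps());
    }
}
```

This avoids depending on hbase-site.xml being on the classpath at all, at the cost of duplicating the settings in code.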

For trace 2 (Spark job):

You need to include hbase-site.xml in the classpath of Spark. You can add hbase-site.xml to the Spark conf directory on all nodes, or add the needed properties to spark-defaults.conf.

Or try:


spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml



Super Collaborator

You need to make sure the Phoenix client has hbase-site.xml on its classpath. You can do this by setting the HBASE_CONF_DIR environment variable.

@ssoldatov

Hi, ssoldatov:

Thanks for the suggestion, but it still does not work; I get the same error.

This is what I did to submit the spark job:

#!/usr/bin/bash
CWD="$(pwd)"
SPARK_CONF_DIR=${CWD}/output/conf
HBASE_CONF_DIR=/usr/hdp/2.5.0.0-1245/hbase/conf
export SPARK_CONF_DIR
export CWD
export HBASE_CONF_DIR
spark-submit \
  --driver-class-path "${CWD}/target/uber-spark-boot-0.1.0.jar:/usr/hdp/2.5.0.0-1245/phoenix/phoenix-client.jar" \
  --class "com.nms.Application" --master local[3]  ${CWD}/target/uber-spark-boot-0.1.0.jar

This is what I printed out at run time in the main procedure of the Spark driver; /usr/hdp/2.5.0.0-1245/hbase/conf is where the correct hbase-site.xml is located:

Let's inspect the classpath:
16/09/15 17:57:30 INFO Application: /home/vagrant/nms/spark-boot/target/uber-spark-boot-0.1.0.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/phoenix/phoenix-4.7.0.2.5.0.0-1245-client.jar
16/09/15 17:57:30 INFO Application: /home/vagrant/nms/spark-boot/output/conf/
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/spark-assembly-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-api-jdo-3.2.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-core-3.2.10.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-rdbms-3.2.9.jar
16/09/15 17:57:30 INFO Application: /etc/hadoop/2.5.0.0-1245/0/
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-s3-1.10.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-core-1.10.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-kms-1.10.6.jar
Let's inspect the environment variables:
16/09/15 17:57:30 INFO Application: Env Var Name : CWD Value : /home/vagrant/nms/spark-boot
16/09/15 17:57:30 INFO Application: Env Var Name : DBUS_SESSION_BUS_ADDRESS Value : unix:abstract=/tmp/dbus-qWsc4sL7En,guid=ee8eaf05797434b6bd4fffdf57dab404
16/09/15 17:57:30 INFO Application: Env Var Name : DESKTOP_SESSION Value : gnome-classic
16/09/15 17:57:30 INFO Application: Env Var Name : DISPLAY Value : :0
16/09/15 17:57:30 INFO Application: Env Var Name : GDMSESSION Value : gnome-classic
16/09/15 17:57:30 INFO Application: Env Var Name : GDM_LANG Value : en_US.utf8
16/09/15 17:57:30 INFO Application: Env Var Name : GEM_HOME Value : /usr/local/rvm/gems/ruby-2.2.0
16/09/15 17:57:30 INFO Application: Env Var Name : GEM_PATH Value : /usr/local/rvm/gems/ruby-2.2.0:/usr/local/rvm/gems/ruby-2.2.0@global
16/09/15 17:57:30 INFO Application: Env Var Name : GJS_DEBUG_OUTPUT Value : stderr
16/09/15 17:57:30 INFO Application: Env Var Name : GJS_DEBUG_TOPICS Value : JS ERROR;JS LOG
16/09/15 17:57:30 INFO Application: Env Var Name : GNOME_DESKTOP_SESSION_ID Value : this-is-deprecated
16/09/15 17:57:30 INFO Application: Env Var Name : GNOME_SHELL_SESSION_MODE Value : classic
16/09/15 17:57:30 INFO Application: Env Var Name : GPG_AGENT_INFO Value : /run/user/1000/keyring/gpg:0:1
16/09/15 17:57:30 INFO Application: Env Var Name : HADOOP_CONF_DIR Value : /usr/hdp/current/hadoop-client/conf
16/09/15 17:57:30 INFO Application: Env Var Name : HADOOP_HOME Value : /usr/hdp/current/hadoop-client
16/09/15 17:57:30 INFO Application: Env Var Name : HBASE_CONF_DIR Value : /usr/hdp/2.5.0.0-1245/hbase/conf
16/09/15 17:57:30 INFO Application: Env Var Name : HDP_VERSION Value : 2.5.0.0-1245
16/09/15 17:57:30 INFO Application: Env Var Name : HISTCONTROL Value : ignoredups 

16/09/15 17:57:30 INFO Application: Env Var Name : HISTSIZE Value : 1000

Explorer

Can we see what the Phoenix System tables look like in HBase Shell?

Run 'hbase shell' then 'list' and post the output.

Example:

[root@sandbox ~]# hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.5.0.0-1245, r53538b8ab6749cbb6fdc0fe448b89aa82495fb3f, Fri Aug 26 01:32:27 UTC 2016

hbase(main):001:0> list

TABLE

SYSTEM:CATALOG

SYSTEM:FUNCTION

SYSTEM:SEQUENCE

SYSTEM:STATS

4 row(s) in 0.2090 seconds => ["SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:SEQUENCE", "SYSTEM:STATS"]

Explorer

If they look like 'SYSTEM.CATALOG', they are still in the original form.

If instead your Phoenix system tables look like 'SYSTEM:CATALOG', they were already migrated to the new namespace mechanism. I think you'll likely have to add 'phoenix.schema.isNamespaceMappingEnabled' to your hbase-site.xml; I don't see a way to go back.

It looks like this. The problem is solved when I copy hbase-site.xml into the Spark conf directory.

Thanks

Shindian

1.8.7-p357 :002 >   list
TABLE
DATA_SCHEMA
PROCESS_LOG
SCHEMA_VERSION
SYSTEM.CATALOG
SYSTEM:CATALOG
SYSTEM:FUNCTION
SYSTEM:SEQUENCE
SYSTEM:STATS
TENANT
WEB_STAT
10 row(s) in 0.4010 seconds


 => ["DATA_SCHEMA", "PROCESS_LOG", "SCHEMA_VERSION", "SYSTEM.CATALOG", "SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "TENANT", "WEB_STAT"]




As an improvement, we could fetch the namespace mapping properties from the server on the client side, so that every client doesn't need to specify them. I have raised a JIRA for this:

https://issues.apache.org/jira/browse/PHOENIX-3288

Adding hbase-site.xml to the Spark conf directory solves the problem.

Thanks

Shindian

New Contributor

This was the solution. Thank you!

New Contributor

Hi all,

If you are using the DBCPConnectionPool in NiFi to connect to Phoenix and you are having the same issue, you must create a symlink to hbase-site.xml in the nar folder of the DBCP service:

ln -s /usr/hdp/current/hbase-master/conf/hbase-site.xml /usr/nifi/work/nar/extensions/nifi-dbcp-service-nar-1.1.0.2.1.1.0-2.nar-unpacked/META-INF/bundled-dependencies/

Thanks @Gabriela Martinez for sharing. Would you mind creating a separate question tagged Phoenix and NiFi, then answering and accepting it? That will benefit other users who are using NiFi with Phoenix.

@X Long

I was facing the same kind of issue. I resolved it with the following steps:

1) In Ambari -> Hive -> Configs -> Advanced -> Custom hive-site -> Add Property..., add the following properties based on your HBase configuration (you can find the values in Ambari -> HBase -> Configs):

hbase.zookeeper.quorum=xyz (take this value from your HBase config)

zookeeper.znode.parent=/hbase-unsecure (take this value from your HBase config)

phoenix.schema.mapSystemTablesToNamespace=true

phoenix.schema.isNamespaceMappingEnabled=true
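
For reference, the same step-1 additions as they would appear in the generated hive-site.xml (the quorum and znode values are placeholders; use the values from your HBase config):

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>xyz</value> <!-- placeholder: copy from your HBase config -->
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```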

2) Copy the following jars to /usr/hdp/current/hive-server2/auxlib:

/usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar

/usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar

If those jars do not work for you, try phoenix-hive-4.7.0.2.5.3.0-37.jar and copy it to /usr/hdp/current/hive-server2/auxlib.

3) Add the following property to custom hive-env:

HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-server2/auxlib

4) Add the following properties to custom hbase-site:

phoenix.schema.mapSystemTablesToNamespace=true
phoenix.schema.isNamespaceMappingEnabled=true

5) Also run the following commands to bundle the config files into the client jar:

1) jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hive/conf/hive-site.xml

2) jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hbase/conf/hbase-site.xml

I hope this solution works for you 🙂
