
Exception connecting to an HBase server with Phoenix 4.7 installed: isNamespaceMappingEnabled problem

Contributor

Hi,

When I try to connect to an HBase server with Phoenix 4.7 installed, using a client of the same version, I get the following exception:

java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties

Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled

I checked the server and the client; both sides have the following properties set to true in hbase-site.xml:

phoenix.schema.isNamespaceMappingEnabled

phoenix.schema.mapSystemTablesToNamespace
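
For reference, this is how those two entries look in hbase-site.xml on both sides (property names and values taken from the settings above, written in the standard Hadoop configuration XML):

<!-- hbase-site.xml, on both client and server -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>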

Attached are the traces and the config screenshots for the HBase client and server. The platform is HDP 2.5 on both sides.

Trace 1 is from using the Phoenix JDBC driver directly; trace 2 is from using it through Spark.

Any idea what I should do?

Thanks

Xindian

1 ACCEPTED SOLUTION


For trace 1 (if you are using sqlline.py):

Check your <PHOENIX_HOME>/bin directory, remove any hbase-site.xml found there, and try again.
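
For example, something like this (the phoenix-client path is the usual HDP location and is an assumption; adjust to your install):

# check whether a stray hbase-site.xml ships in the Phoenix bin directory
ls /usr/hdp/current/phoenix-client/bin/hbase-site.xml
# if it is there, move it aside and rerun sqlline.py
mv /usr/hdp/current/phoenix-client/bin/hbase-site.xml /tmp/hbase-site.xml.bak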

If you are using a Java program, make sure hbase-site.xml is on the classpath, or set these properties when creating the connection, as in the sketch below.
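
A minimal sketch of the second option, passing the properties when the connection is created (the ZooKeeper quorum and root node in the JDBC URL are placeholders, not values from this thread):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixNamespaceConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // These must match the server-side hbase-site.xml, otherwise Phoenix
        // raises ERROR 726 (Inconsistent namespace mapping properties)
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        props.setProperty("phoenix.schema.mapSystemTablesToNamespace", "true");
        // URL format: jdbc:phoenix:<zk quorum>:<zk port>:<zk root node>
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:zk-host:2181:/hbase-unsecure", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}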

For trace 2 (Spark job):

You need to include hbase-site.xml on the Spark classpath. You can add hbase-site.xml to the Spark conf directory on all nodes, or add the needed properties to spark-defaults.conf.

Or try:


spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf
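
The same two settings can also be passed per job on the spark-submit command line instead of editing spark-defaults.conf (a sketch; your-app.jar stands in for whatever jar and arguments your job already uses):

spark-submit \
  --conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf" \
  --conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf" \
  your-app.jar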


12 REPLIES

Super Collaborator

You need to make sure the Phoenix client has hbase-site.xml on its classpath. You can do this by setting the HBASE_CONF_DIR environment variable.
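
For example (the conf path is the usual HDP default, and zk-host is a placeholder for your ZooKeeper quorum):

export HBASE_CONF_DIR=/etc/hbase/conf
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase-unsecure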

Contributor

Hi @ssoldatov,

Thanks for the suggestion, but it still does not work; I get the same error.

This is how I submit the Spark job:

#!/usr/bin/bash
# Export conf locations so Spark and the HBase client can find their configs
CWD="$(pwd)"
SPARK_CONF_DIR=${CWD}/output/conf
HBASE_CONF_DIR=/usr/hdp/2.5.0.0-1245/hbase/conf
export SPARK_CONF_DIR
export CWD
export HBASE_CONF_DIR
# Put the application uber jar and the Phoenix client jar on the driver classpath
spark-submit \
  --driver-class-path "${CWD}/target/uber-spark-boot-0.1.0.jar:/usr/hdp/2.5.0.0-1245/phoenix/phoenix-client.jar" \
  --class "com.nms.Application" --master local[3] ${CWD}/target/uber-spark-boot-0.1.0.jar

This is what I printed out at run time in the main procedure of the Spark driver; /usr/hdp/2.5.0.0-1245/hbase/conf is where the correct hbase-site.xml is located:

Let's inspect the classpath:
16/09/15 17:57:30 INFO Application: /home/vagrant/nms/spark-boot/target/uber-spark-boot-0.1.0.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/phoenix/phoenix-4.7.0.2.5.0.0-1245-client.jar
16/09/15 17:57:30 INFO Application: /home/vagrant/nms/spark-boot/output/conf/
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/spark-assembly-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-api-jdo-3.2.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-core-3.2.10.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/spark/lib/datanucleus-rdbms-3.2.9.jar
16/09/15 17:57:30 INFO Application: /etc/hadoop/2.5.0.0-1245/0/
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-s3-1.10.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-core-1.10.6.jar
16/09/15 17:57:30 INFO Application: /usr/hdp/2.5.0.0-1245/hadoop/lib/aws-java-sdk-kms-1.10.6.jar
Let's inspect the environment variables:
16/09/15 17:57:30 INFO Application: Env Var Name : CWD Value : /home/vagrant/nms/spark-boot
16/09/15 17:57:30 INFO Application: Env Var Name : DBUS_SESSION_BUS_ADDRESS Value : unix:abstract=/tmp/dbus-qWsc4sL7En,guid=ee8eaf05797434b6bd4fffdf57dab404
16/09/15 17:57:30 INFO Application: Env Var Name : DESKTOP_SESSION Value : gnome-classic
16/09/15 17:57:30 INFO Application: Env Var Name : DISPLAY Value : :0
16/09/15 17:57:30 INFO Application: Env Var Name : GDMSESSION Value : gnome-classic
16/09/15 17:57:30 INFO Application: Env Var Name : GDM_LANG Value : en_US.utf8
16/09/15 17:57:30 INFO Application: Env Var Name : GEM_HOME Value : /usr/local/rvm/gems/ruby-2.2.0
16/09/15 17:57:30 INFO Application: Env Var Name : GEM_PATH Value : /usr/local/rvm/gems/ruby-2.2.0:/usr/local/rvm/gems/ruby-2.2.0@global
16/09/15 17:57:30 INFO Application: Env Var Name : GJS_DEBUG_OUTPUT Value : stderr
16/09/15 17:57:30 INFO Application: Env Var Name : GJS_DEBUG_TOPICS Value : JS ERROR;JS LOG
16/09/15 17:57:30 INFO Application: Env Var Name : GNOME_DESKTOP_SESSION_ID Value : this-is-deprecated
16/09/15 17:57:30 INFO Application: Env Var Name : GNOME_SHELL_SESSION_MODE Value : classic
16/09/15 17:57:30 INFO Application: Env Var Name : GPG_AGENT_INFO Value : /run/user/1000/keyring/gpg:0:1
16/09/15 17:57:30 INFO Application: Env Var Name : HADOOP_CONF_DIR Value : /usr/hdp/current/hadoop-client/conf
16/09/15 17:57:30 INFO Application: Env Var Name : HADOOP_HOME Value : /usr/hdp/current/hadoop-client
16/09/15 17:57:30 INFO Application: Env Var Name : HBASE_CONF_DIR Value : /usr/hdp/2.5.0.0-1245/hbase/conf
16/09/15 17:57:30 INFO Application: Env Var Name : HDP_VERSION Value : 2.5.0.0-1245
16/09/15 17:57:30 INFO Application: Env Var Name : HISTCONTROL Value : ignoredups
16/09/15 17:57:30 INFO Application: Env Var Name : HISTSIZE Value : 1000

Contributor

Can we see what the Phoenix system tables look like in the HBase shell?

Run 'hbase shell', then 'list', and post the output.

Example:

[root@sandbox ~]# hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell Version 1.1.2.2.5.0.0-1245, r53538b8ab6749cbb6fdc0fe448b89aa82495fb3f, Fri Aug 26 01:32:27 UTC 2016

hbase(main):001:0> list

TABLE

SYSTEM:CATALOG

SYSTEM:FUNCTION

SYSTEM:SEQUENCE

SYSTEM:STATS

4 row(s) in 0.2090 seconds

 => ["SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:SEQUENCE", "SYSTEM:STATS"]

Contributor

If they look like 'SYSTEM.CATALOG' (with a dot), they are still in the original format.

If instead your Phoenix system tables look like 'SYSTEM:CATALOG' (with a colon), they have already been migrated to the new namespace mechanism. I think you'll have to add 'phoenix.schema.isNamespaceMappingEnabled' to your hbase-site.xml; I don't see a way to go back.

Contributor

It looks like this (output below). The problem is solved when I copy hbase-site.xml into the Spark conf directory.

Thanks

Shindian

1.8.7-p357 :002 >   list
TABLE
DATA_SCHEMA
PROCESS_LOG
SCHEMA_VERSION
SYSTEM.CATALOG
SYSTEM:CATALOG
SYSTEM:FUNCTION
SYSTEM:SEQUENCE
SYSTEM:STATS
TENANT
WEB_STAT
10 row(s) in 0.4010 seconds


 => ["DATA_SCHEMA", "PROCESS_LOG", "SCHEMA_VERSION", "SYSTEM.CATALOG", "SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "TENANT", "WEB_STAT"]





As an improvement, the client could fetch the namespace mapping properties from the server, so that every client doesn't need to specify them. I have raised a JIRA for this:

https://issues.apache.org/jira/browse/PHOENIX-3288

Contributor

Adding hbase-site.xml to the Spark conf directory solves the problem.
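
Concretely, that was something like this on each node (the source path is the one from my earlier post; the Spark conf destination is an assumed HDP default), followed by resubmitting the job:

cp /usr/hdp/2.5.0.0-1245/hbase/conf/hbase-site.xml /etc/spark/conf/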

Thanks

Shindian

Explorer

This was the solution. Thank you!