Support Questions

Find answers, ask questions, and share your expertise

Failure to use Hive StorageHandler in Phoenix

Contributor

I'm trying to use the Hive StorageHandler for Phoenix, but I'm getting this error:

Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:No suitable driver found for jdbc:phoenix:ip-10-113-98-203.ec2.internal:2181:/hbase-unsecure;) (state=08S01,code=1)
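For reference, the URL in the error matches the pattern the storage handler builds from the ZooKeeper table properties, jdbc:phoenix:&lt;quorum&gt;:&lt;client port&gt;:&lt;znode parent&gt;, so "No suitable driver found" suggests a driver/classpath problem rather than a malformed URL. A quick sketch of how the pieces fit together (values are the ones from this cluster):

```shell
# Sketch: how the Phoenix JDBC URL in the error is assembled from the
# ZooKeeper table properties. Values are from this cluster; treat them
# as placeholders for your own.
QUORUM="ip-10-113-98-203.ec2.internal"
PORT="2181"
ZNODE="/hbase-unsecure"
URL="jdbc:phoenix:${QUORUM}:${PORT}:${ZNODE}"
echo "$URL"
```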

I know that Hive and Phoenix are working on the cluster (which is HDP 2.5).

This test shows that DNS resolution works, the ZooKeeper server is reachable, and the ZNode for HBase exists:

zookeeper-client -server ip-10-113-98-203.ec2.internal:2181 ls / | tail -1

[registry, hiveserver2, zookeeper, hbase-unsecure, rmstore]

I add the jars in the hive script like this:

-- hadoop fs -copyFromLocal /usr/hdp/current/phoenix-client/phoenix-hive.jar /user/etl/hive_libs

ADD JAR hdfs:///user/etl/hive_libs/phoenix-hive.jar;

ADD JAR hdfs:///user/etl/hive_libs/phoenix-client.jar;

Using this example (https://phoenix.apache.org/hive_storage_handler.html) and replacing the hostname and ZNode as above gives the command that fails:

create table phoenix_table (
  s1 string,
  i1 int,
  f1 float,
  d1 double
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "phoenix_table",
  "phoenix.zookeeper.quorum" = "ip-10-113-98-203.ec2.internal",
  "phoenix.zookeeper.znode.parent" = "/hbase-unsecure",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "s1, i1",
  "phoenix.column.mapping" = "s1:s1, i1:i1, f1:f1, d1:d1",
  "phoenix.table.options" = "SALT_BUCKETS=10, DATA_BLOCK_ENCODING='DIFF'"
);

Does anyone who uses the Hive StorageHandler have an idea about this?

1 ACCEPTED SOLUTION

Contributor

I found the problem: if I run the exact same commands with the "real" hive CLI, they work. If I use beeline, they fail. This is troubling, because the ADD JAR statements used an HDFS path, so the environment should be the same.

Worse, the hive CLI is headed for removal if/when beeline can take over. Let's hope that doesn't happen too soon.
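For anyone comparing, a sketch of the two invocations (the hive2 URL and file path are placeholders, and the DDL is trimmed for brevity). The difference is consistent with the aux-path fix discussed later in the thread: the hive CLI runs the DDL in its own local JVM, while beeline sends it to HiveServer2, whose classpath the session-level ADD JAR does not fully reach.

```shell
# Write the DDL to a script file (paths are assumptions):
cat > /tmp/create_phoenix_table.sql <<'EOF'
ADD JAR hdfs:///user/etl/hive_libs/phoenix-hive.jar;
ADD JAR hdfs:///user/etl/hive_libs/phoenix-client.jar;
create table phoenix_table (s1 string, i1 int, f1 float, d1 double)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES ("phoenix.table.name" = "phoenix_table");
EOF

# Works in this setup (DDL runs in the local hive CLI JVM):
#   hive -f /tmp/create_phoenix_table.sql
# Fails with "No suitable driver found" (DDL runs inside HiveServer2):
#   beeline -u jdbc:hive2://localhost:10000 -f /tmp/create_phoenix_table.sql
```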


5 REPLIES


Contributor

If anyone has experience with the Hive StorageHandler and has an opinion on whether it works with beeline, I'd appreciate your thoughts.

Contributor

Follow-up (for posterity): beeline does work with the storage handler. I found that the changes I had made to the Hive aux path were being undone.

To fix this, I installed phoenix-hive.jar on all nodes (workers and client/edge nodes) and added the following to the "hive-env template" in Ambari (which restarts HiveServer2 so the change takes effect):

export HIVE_AUX_JARS_PATH=$HIVE_AUX_JARS_PATH:/opt/hive/auxpath/phoenix-hive.jar
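Note that the export appends to whatever aux path is already set rather than replacing it. A minimal local sketch (the pre-existing path is an assumption for illustration):

```shell
# Simulate a value Ambari might already have set (placeholder path):
HIVE_AUX_JARS_PATH="/usr/hdp/current/hive/lib/existing.jar"
# The export from above appends the Phoenix jar, preserving the old entries:
export HIVE_AUX_JARS_PATH=$HIVE_AUX_JARS_PATH:/opt/hive/auxpath/phoenix-hive.jar
echo "$HIVE_AUX_JARS_PATH"
```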

Contributor

Also, for posterity: to make this easier to manage when using the built-in Phoenix, one can simply use the path to the phoenix-hive jar installed by HDP, so the path becomes

export HIVE_AUX_JARS_PATH=$HIVE_AUX_JARS_PATH:/usr/hdp/current/phoenix-client/phoenix-hive.jar

This is better when using HDP/Ambari-managed Phoenix, since you won't need to install new jars or change this config if/when you upgrade Phoenix. Just restart HiveServer2 as needed.


@phillip young

I was facing the same kind of issue. I resolved it with the following steps:

1) In Ambari -> Hive -> Configs -> Advanced -> Custom hive-site, choose "Add Property..." and add the following properties, with values taken from your HBase configuration (you can find them under Ambari -> HBase -> Configs):

hbase.zookeeper.quorum=xyz (take this value from the HBase configs)

zookeeper.znode.parent=/hbase-unsecure (take this value from the HBase configs)

phoenix.schema.mapSystemTablesToNamespace=true

phoenix.schema.isNamespaceMappingEnabled=true
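For reference, the same properties expressed as a hive-site.xml fragment (in Ambari you enter them through the UI rather than editing the file; "xyz" is a placeholder for your quorum):

```xml
<!-- Equivalent hive-site.xml fragment; "xyz" is a placeholder quorum. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>xyz</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```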

2) Copy these jars to /usr/hdp/current/hive-server2/auxlib:

/usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar

/usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar

If that jar does not work for you, try phoenix-hive-4.7.0.2.5.3.0-37.jar instead and copy it to /usr/hdp/current/hive-server2/auxlib.

3) Add this property to custom hive-env:

HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-server2/auxlib/

4) Add the following properties to custom hbase-site.xml:

phoenix.schema.mapSystemTablesToNamespace=true

phoenix.schema.isNamespaceMappingEnabled=true
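For reference, the step-4 properties as an hbase-site.xml fragment:

```xml
<!-- Equivalent custom hbase-site.xml fragment. -->
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```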

5) Also run the following commands, which embed the Hive and HBase configs into the Phoenix client jar:

jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hive/conf/hive-site.xml

jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hbase/conf/hbase-site.xml

I hope this solution works for you 🙂