
Hive HA (JDBC)

Explorer

Hi all!

I appreciate any help, thank you!

I'm using Hive HA via ZooKeeper. This is the HA part of my hive-site.xml:

<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>

<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>


<property>
  <name>hive.zookeeper.quorum</name>
  <value>hadoopcluster01:2181,hadoopcluster02:2181</value>
</property>

<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>


When I use beeline:

 !connect jdbc:hive2://hadoopcluster01:2181,hadoopcluster02:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

It works perfectly!


When I use the Cloudera JDBC driver (v2.5.15) to connect to a single HiveServer2:
Class.forName("com.cloudera.hive.jdbc41.HS2Driver");

DriverManager.getConnection("jdbc:hive2://hadoopcluster01:10000", prop);
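(For anyone reproducing this: the `prop` object above isn't shown in the post. Here is a minimal sketch of what it might contain; the key names are standard JDBC, the user/password values are placeholders, and the actual connection call is left in comments because it needs the Cloudera driver jar and a live HiveServer2.)

```java
import java.util.Properties;

public class SingleHs2 {
    // Hypothetical connection properties -- values are placeholders,
    // not taken from the original post.
    static Properties connectionProps(String user, String password) {
        Properties prop = new Properties();
        prop.setProperty("user", user);
        prop.setProperty("password", password);
        return prop;
    }

    public static void main(String[] args) {
        Properties prop = connectionProps("hive", "");
        String url = "jdbc:hive2://hadoopcluster01:10000";
        System.out.println(url + " (user=" + prop.getProperty("user") + ")");
        // With the Cloudera driver jar on the classpath, the connection is:
        // Class.forName("com.cloudera.hive.jdbc41.HS2Driver");
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(url, prop);
    }
}
```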

It works perfectly!


But when I try to connect through ZooKeeper via JDBC:

Class.forName("com.cloudera.hive.jdbc41.HS2Driver");

DriverManager.getConnection("jdbc:hive2://zk=hadoopcluster01:2181,hadoopcluster02:2181/hiveserver2", prop);

it does not work:

[2015-11-11 09:35:29.426] boot - 5832  INFO [main] --- ZooKeeper: Initiating client connection, connectString=hadoopcluster01:2181 sessionTimeout=3000 watcher=com.cloudera.hive.hivecommon.api.ZookeeperDynamicDiscovery$1@74455848
[2015-11-11 09:35:29.426] boot - 5832  INFO [main-SendThread(hadoopcluster01.nebu.local:2181)] --- ClientCnxn: Opening socket connection to server hadoopcluster01.nebu.local/192.168.99.71:2181. Will not attempt to authenticate using SASL (unknown error)
[2015-11-11 09:35:29.426] boot - 5832  INFO [main] --- ZooKeeper: Initiating client connection, connectString=hadoopcluster02:2181 sessionTimeout=3000 watcher=com.cloudera.hive.hivecommon.api.ZookeeperDynamicDiscovery$1@4e7912d8
[2015-11-11 09:35:29.426] boot - 5832  INFO [main-SendThread(hadoopcluster01.nebu.local:2181)] --- ClientCnxn: Socket connection established to hadoopcluster01.nebu.local/192.168.99.71:2181, initiating session
[2015-11-11 09:35:29.426] boot - 5832  INFO [main-SendThread(hadoopcluster02.nebu.local:2181)] --- ClientCnxn: Opening socket connection to server hadoopcluster02.nebu.local/192.168.99.72:2181. Will not attempt to authenticate using SASL (unknown error)
[2015-11-11 09:35:29.426] boot - 5832  INFO [main-SendThread(hadoopcluster02.nebu.local:2181)] --- ClientCnxn: Socket connection established to hadoopcluster02.nebu.local/192.168.99.72:2181, initiating session
[2015-11-11 09:35:29.426] boot - 5832  INFO [main-SendThread(hadoopcluster01.nebu.local:2181)] --- ClientCnxn: Session establishment complete on server hadoopcluster01.nebu.local/192.168.99.71:2181, sessionid = 0x150f58353570006, negotiated timeout = 6000
[2015-11-11 09:35:29.457] boot - 5832  INFO [main-SendThread(hadoopcluster02.nebu.local:2181)] --- ClientCnxn: Session establishment complete on server hadoopcluster02.nebu.local/192.168.99.72:2181, sessionid = 0x250f582c7990001, negotiated timeout = 6000
[2015-11-11 09:35:29.457] boot - 5832  INFO [main] --- ZooKeeper: Session: 0x250f582c7990001 closed
[2015-11-11 09:35:29.457] boot - 5832  INFO [main-EventThread] --- ClientCnxn: EventThread shut down
java.sql.SQLException: [Cloudera][HiveJDBCDriver](500170) Error occured while setting up Zookeeper Dynamic Discovery: [Cloudera][HiveJDBCDriver]Error Occured Connecting to ZooKeeper: At: hadoopcluster01:2181 With Error Message: Path length must be > 0..
    at com.cloudera.hive.hivecommon.api.ZooKeeperEnabledExtendedHS2Factory.createClient(Unknown Source)
    at com.cloudera.hive.hivecommon.core.HiveJDBCCommonConnection.connect(Unknown Source)
    at com.cloudera.hive.hive.core.HiveJDBCConnection.connect(Unknown Source)
    at com.cloudera.hive.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
    at com.cloudera.hive.jdbc.common.AbstractDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at com.nebu.nldpconnection.main.NLDPConnectionTest.getConnection(NLDPConnectionTest.java:81)
    at com.nebu.nldpconnection.main.NLDPConnectionTest.run(NLDPConnectionTest.java:214)
    at org.springframework.boot.SpringApplication.runCommandLineRunners(SpringApplication.java:677)
    at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:695)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:321)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:952)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:941)
Caused by: com.cloudera.hive.support.exceptions.GeneralException: [Cloudera][HiveJDBCDriver](500170) Error occured while setting up Zookeeper Dynamic Discovery: [Cloudera][HiveJDBCDriver]Error Occured Connecting to ZooKeeper: At: hadoopcluster01:2181 With Error Message: Path length must be > 0..
    ... 14 more

1 ACCEPTED SOLUTION

Explorer

Thank you!

According to the Cloudera JDBC documentation, this is the right format:
jdbc:hive2://zk=hadoopcluster01:2181,hadoopcluster02:2181/hiveserver2

Of course I tried both ways... unfortunately neither worked.

So I switched to the original Hive JDBC driver:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
</dependency>

and it looks good now! 😉
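(A minimal sketch of how the discovery URL for the Apache driver can be assembled from the quorum and namespace in the hive-site.xml above; the helper name is mine, and the connection call is commented out because it needs the hive-jdbc jar and a live cluster.)

```java
public class HiveHaUrl {
    // Builds the ZooKeeper-discovery JDBC URL in the same shape as the
    // working beeline URL from the question.
    static String discoveryUrl(String quorum, String namespace) {
        return "jdbc:hive2://" + quorum
                + "/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=" + namespace;
    }

    public static void main(String[] args) {
        String url = discoveryUrl(
                "hadoopcluster01:2181,hadoopcluster02:2181", "hiveserver2");
        System.out.println(url);
        // With org.apache.hive:hive-jdbc on the classpath:
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(url);
    }
}
```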


2 REPLIES

Contributor

I wonder if you are missing some parameters. You need to add serviceDiscoveryMode and zooKeeperNamespace to the JDBC URL.

The format is:

jdbc:hive2://{quorum};serviceDiscoveryMode=zooKeeper;zooKeeperNamespace={namespace}

so something like:

jdbc:hive2://hadoopcluster01:2181,hadoopcluster02:2181;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
