
Process for forcing a failover for Hive service

Contributor

Is there a documented process for forcing a failover of the Hive service in an HDP cluster with HA enabled? In particular, I am looking to see whether this process can be automated - is there a command that can be invoked via CLI and/or API?

1 ACCEPTED SOLUTION


Hive has two services - Hive Metastore and HiveServer2.

Multiple Hive Metastore processes can be started, with their addresses listed in hive.metastore.uris. I believe clients pick up the first active one from the list.
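For illustration, a hive-site.xml entry listing two metastore instances might look like this (the hostnames are placeholders):

```xml
<property>
  <name>hive.metastore.uris</name>
  <!-- comma-separated list of metastore endpoints; clients try them in order -->
  <value>thrift://metastore1.example.com:9083,thrift://metastore2.example.com:9083</value>
</property>
```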

Multiple HiveServer2 instances can be started, and they register with ZooKeeper. A client randomly picks one and connects to it.
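As a sketch, a client using ZooKeeper-based discovery connects with a JDBC URL of this form (the ZooKeeper hosts are placeholders, and the namespace must match the one HiveServer2 registers under):

```sh
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```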

In either case, once a client session has been initiated, killing or shutting down the instance invalidates the session and forces the client to reconnect. For HiveServer2, however, you can gracefully shut down an instance using the deregister command. When a HiveServer2 instance is deregistered, it is removed from the list of servers available for new client connections; existing client sessions on that server are not affected. When the last client session on the server is closed, the server shuts down.
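The deregister step above can be run from the command line; in HDP it takes the HiveServer2 version identifier as an argument (shown here as a placeholder):

```sh
hive --service hiveserver2 --deregister <version>
```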


3 REPLIES

Master Mentor

@vnair@hortonworks.com

If an HS2 instance fails while a client is connected, the session is lost. Since this situation needs to be handled on the client side, there is no automatic failover; the client needs to reconnect using ZooKeeper.
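Since the reconnect is the client's responsibility, one approach is a small retry wrapper around the connect-and-query step. This is only a sketch: `connect` and `run_query` stand in for whatever client library is actually in use, and ZooKeeper-based discovery is assumed to happen inside `connect`.

```python
import time

def run_with_retry(connect, run_query, attempts=3, backoff_s=1.0):
    """Retry a query through a fresh connection when the current HS2 fails.

    `connect` is a placeholder callable that resolves a live HiveServer2
    instance (e.g. via ZooKeeper discovery) and returns a connection;
    `run_query` is a placeholder that runs the work against that connection.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            conn = connect()          # pick a live HS2 instance
            return run_query(conn)    # session is lost if this HS2 dies mid-query
        except ConnectionError as err:
            last_err = err
            time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
    raise last_err
```

The wrapper retries only on `ConnectionError`; in a real client you would retry on the library's transport-level exceptions instead.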

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hadoop-ha/content/ha-hs2-requests.html


New Contributor

In our setup, we run queries with a client that connects to HS2.

For deployment, if I deregister HS2 from ZooKeeper, it starts shutting down the JVM and closes the port used for checking query progress, causing the client to retry.

Is there any way I can do a deployment without impacting jobs? For example: deregister from ZooKeeper so that running queries continue and existing connections are maintained, but no new requests are accepted, and then deploy once the queries have finished.