
Process for forcing a failover for Hive service


New Contributor

Is there a documented process for forcing a failover for the Hive service in an HDP cluster with HA enabled? In particular, I would like to know whether this process can be automated: is there a command that can be invoked via CLI and/or API?

1 ACCEPTED SOLUTION

Re: Process for forcing a failover for Hive service

Master Collaborator

Hive has two services - Hive Metastore and HiveServer2.

Multiple Hive Metastore processes can be started and listed in hive.metastore.uris. I believe clients pick the first active one from the list.
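
For illustration, a hive-site.xml entry listing multiple metastores could look like this (the hostnames and port are placeholders; use your own metastore hosts):

  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore1.example.com:9083,thrift://metastore2.example.com:9083</value>
  </property>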

Multiple HiveServer2 instances can be started, and each registers itself with ZooKeeper. The client randomly picks one and connects to it.
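
As a sketch, a client can connect through ZooKeeper service discovery with a JDBC URL along these lines (the ZooKeeper hosts are placeholders; "hiveserver2" is the usual namespace):

  beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"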

In either case, once a client session has been initiated, killing or shutting down the instance invalidates the session and the client has to reconnect. For HiveServer2, however, you can gracefully shut down an instance using the deregister command. When a HiveServer2 instance is deregistered, it is removed from the list of servers available for new client connections (existing client sessions on that server are not affected). When the last client session on the server is closed, the server shuts down.
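
For reference, the HDP documentation describes deregistering an instance with a command roughly like the following, where the argument is the version/package ID of the HiveServer2 instance to remove (check your cluster's docs for the exact value):

  hive --service hiveserver2 --deregister <package ID>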

3 REPLIES

Re: Process for forcing a failover for Hive service

@vnair@hortonworks.com

If an HS2 instance fails while a client is connected, the session is lost. Since this situation needs to be handled at the client, there is no automatic failover; the client needs to reconnect using ZooKeeper.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hadoop-ha/content/ha-hs2-requests.html
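
A minimal client-side reconnect sketch, assuming the Hive JDBC driver is on the classpath and using placeholder ZooKeeper hostnames (illustrative only, not a drop-in failover mechanism):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.SQLException;
  import java.sql.Statement;

  public class Hs2Reconnect {
      // Placeholder ZooKeeper quorum; "hiveserver2" is the usual namespace.
      private static final String URL =
              "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;"
              + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";

      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.hive.jdbc.HiveDriver");
          for (int attempt = 1; attempt <= 3; attempt++) {
              try (Connection conn = DriverManager.getConnection(URL, "hive", "");
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery("SELECT 1")) {
                  while (rs.next()) {
                      System.out.println(rs.getInt(1));
                  }
                  return; // query succeeded
              } catch (SQLException e) {
                  // The HS2 instance went away mid-session: the session is lost,
                  // so open a new connection via ZooKeeper, which points the
                  // client at a live instance.
                  System.err.println("Connection failed (attempt " + attempt + "): " + e.getMessage());
              }
          }
      }
  }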

Re: Process for forcing a failover for Hive service

New Contributor

In our setup, we run queries with a client that connects to HS2.

During a deployment, if I deregister HS2 from ZK, it starts shutting down the JVM and closes the port used for checking query progress, which causes the client to retry.

Is there any way I can do a deployment without impacting jobs? For example: deregister from ZK while running queries keep running and their connections are maintained, stop accepting new requests, and then deploy once the queries have finished.