Namenode HA Switchover

Explorer

Hello,

In our cluster, NameNode switchover is happening every day. Can anyone please help me identify which logs to check to find the root cause of this repeated switchover?

 

@namenode 

 

 

7 REPLIES

Explorer

The issue is happening on the Hortonworks platform.

Master Mentor

@Vinay1991 

Unfortunately, you haven't described your cluster setup, but my assumption is that you have 3 ZooKeeper servers (ZKs) in your HA implementation. There are two components deployed in Hadoop HDFS to implement automatic failover.

 

These two components are:

  • ZKFailoverController process (ZKFC)
  • ZooKeeper quorum (3 ZKs)

 

1. ZKFailoverController (ZKFC)
The ZKFC is a ZooKeeper client that is also responsible for managing and monitoring the NameNode state. A ZKFC instance runs on every node in the Hadoop cluster that runs a NameNode.

 

These two components are responsible for:

Health monitoring
ZKFC is responsible for health monitoring: it periodically pings the NameNode with health-check commands. As long as the NameNode responds with a healthy status in a timely manner, ZKFC considers the NameNode healthy. If the NameNode crashes, freezes, or otherwise enters an unhealthy state, ZKFC marks it as unhealthy.
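If you want a feel for what that heartbeat amounts to, here is a rough, illustrative Python sketch. It is not the real ZKFC (which uses the HAServiceProtocol monitorHealth() RPC); it just probes an assumed NameNode endpoint, nn1.example.com:8020, with a timeout, so treat the host, port, and intervals as placeholders.

```python
import socket
import time

# Illustrative values only - substitute your own NameNode host/port.
NN_HOST = "nn1.example.com"   # hypothetical NameNode host
NN_RPC_PORT = 8020            # common NN RPC port; yours may differ
TIMEOUT = 5                   # seconds; compare with the 5000 ms timeouts seen in ZKFC logs
INTERVAL = 5                  # seconds between checks

def namenode_reachable(host, port, timeout):
    """Very rough stand-in for ZKFC's periodic health check: just verifies
    that the NameNode RPC port accepts a TCP connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    status = "healthy" if namenode_reachable(NN_HOST, NN_RPC_PORT, TIMEOUT) else "UNHEALTHY"
    print(f"{time.ctime()} {NN_HOST}:{NN_RPC_PORT} -> {status}")
    time.sleep(INTERVAL)
```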

ZooKeeper session management
ZKFC is also responsible for session management with ZooKeeper. It keeps a session open in ZooKeeper while the local NameNode is healthy. If the local NameNode is the active NameNode, it also holds a special lock znode within that session. This lock uses ZooKeeper's support for "ephemeral" nodes, so if the session expires, the lock node is deleted automatically.
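As a small illustration of the ephemeral-lock behaviour, here is a sketch using the Python kazoo client against an assumed test ZooKeeper on 127.0.0.1:2181. The path /demo-ha/ActiveStandbyElectorLock is made up for the example; in a real HDFS HA setup the lock normally lives under /hadoop-ha/<nameservice>/.

```python
from kazoo.client import KazooClient

# Assumes a test ZooKeeper on localhost; adjust the connection string as needed.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

zk.ensure_path("/demo-ha")
lock_path = "/demo-ha/ActiveStandbyElectorLock"   # made-up path for the demo

# Ephemeral znode: it only exists while this client's session is alive.
zk.create(lock_path, b"nn1", ephemeral=True)
print("lock held while the session is open:", zk.exists(lock_path) is not None)

# Closing (or losing) the session removes the lock automatically, which is
# exactly what allows another NameNode to win the next election.
zk.stop()
zk.close()
```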

ZooKeeper-based election
When the local NameNode is healthy and ZKFC sees that no other NameNode currently holds the lock znode, it tries to acquire the lock itself. If it succeeds, it has won the election and is then responsible for running a failover to make its local NameNode active. The failover process run by the ZKFC is similar to the manual failover described in the NameNode High Availability article.
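Continuing the same illustrative kazoo sketch, this is roughly what the election step amounts to: whichever client manages to create the lock znode wins, and the loser just watches the lock so it can retry when the holder's session dies. Hosts and paths are placeholders, not the real ZKFC code.

```python
from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

# Same illustrative setup as above: test ZooKeeper on localhost, made-up path.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()
zk.ensure_path("/demo-ha")
LOCK = "/demo-ha/ActiveStandbyElectorLock"

def try_to_become_active(my_id: bytes) -> bool:
    """Whichever client manages to create the exclusive lock znode wins."""
    try:
        zk.create(LOCK, my_id, ephemeral=True)
        print("won the election: run failover and make the local NN active")
        return True
    except NodeExistsError:
        holder, _ = zk.get(LOCK)
        print(f"lost the election: {holder!r} holds the lock")
        # Watch the lock so we can retry as soon as the holder's session dies.
        zk.exists(LOCK, watch=lambda event: try_to_become_active(my_id))
        return False

try_to_become_active(b"nn2")
```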

2. ZooKeeper quorum
A ZooKeeper quorum is a highly available service for maintaining small amounts of coordination data, notifying clients of changes in that data, and monitoring clients for failures.
The HDFS implementation of automatic failover depends on ZooKeeper for the following:

Failure detection: each NameNode machine in the Hadoop cluster maintains a persistent session in ZooKeeper. If one of the machines crashes, its ZooKeeper session expires, and ZooKeeper then notifies the other NameNode(s) to start the failover process.

(attached image: Shelton_0-1627415301421.png)

 

ZooKeeper provides a simple mechanism to exclusively select the active NameNode: if the active NameNode fails, a standby NameNode can take the special exclusive lock in ZooKeeper, indicating that it should become the next active NameNode.

 

  • After the HealthMonitor is initialized, internal threads are started that periodically call the HAServiceProtocol RPC interface of the NameNode to check its health status.
  • If the HealthMonitor detects a change in the NameNode's health status, it calls back the corresponding method registered by the ZKFailoverController for processing.
  • If the ZKFailoverController decides that an active/standby switch is needed, it first uses the ActiveStandbyElector to conduct an automatic election of the active NameNode.
  • The ActiveStandbyElector interacts with ZooKeeper to complete the automatic election.
  • Once the election is complete, the ActiveStandbyElector calls back the corresponding ZKFailoverController method to notify the current NameNode to become either the active or the standby NameNode.
  • The ZKFailoverController calls the HAServiceProtocol RPC interface of the NameNode to transition it to the Active or Standby state. (A simplified sketch of this flow is shown below.)
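To tie those steps together, here is a deliberately simplified Python sketch of the control flow. Every function in it is a hypothetical stand-in for illustration only; the real logic lives in Hadoop's HealthMonitor, ActiveStandbyElector, and ZKFailoverController classes.

```python
# All functions below are hypothetical stand-ins, not Hadoop code.

def check_health(nn):
    """HealthMonitor: periodic health check of the local NameNode (stubbed)."""
    return nn["healthy"]

def elect_active(lock, nn):
    """ActiveStandbyElector: whoever grabs the shared lock first wins."""
    if lock["holder"] is None:
        lock["holder"] = nn["name"]
    return lock["holder"] == nn["name"]

def transition(nn, state):
    """ZKFailoverController -> HAServiceProtocol RPC: set Active or Standby."""
    nn["state"] = state
    print(f'{nn["name"]} -> {state}')

def zkfc_step(nn, lock):
    if not check_health(nn):
        # An unhealthy NN releases the lock so another NN can win the election.
        if lock["holder"] == nn["name"]:
            lock["holder"] = None
        transition(nn, "standby")
    elif elect_active(lock, nn):
        transition(nn, "active")
    else:
        transition(nn, "standby")

# Two healthy NameNodes: the first wins the election, the second stays standby.
lock = {"holder": None}
nn1 = {"name": "nn1", "healthy": True, "state": None}
nn2 = {"name": "nn2", "healthy": True, "state": None}
zkfc_step(nn1, lock)
zkfc_step(nn2, lock)

# nn1 becomes unhealthy: it drops the lock, and nn2 wins the next election.
nn1["healthy"] = False
zkfc_step(nn1, lock)
zkfc_step(nn2, lock)
```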

Taking all of the above into account, the first logs to check are the ZooKeeper and NameNode logs:

 

/var/log/hadoop/hdfs
/var/log/zookeeper

 


My suspicion is that you have an issue with the NameNode heartbeat: ZooKeeper fails to get the ping back in time, marks the NN as dead, and elects a new active NameNode, and that keeps happening in a loop. So check those ZK logs, and ensure the time is set correctly and is in sync across all hosts!
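If it helps, something like the following sketch can surface the interesting lines from both log directories in one pass. The directories are the ones above; the exact file names will differ on your hosts, and the pattern list is just a starting point to adjust as you go.

```python
import glob
import re

# Directories from above; exact file names differ per host and installation.
LOG_GLOBS = ["/var/log/hadoop/hdfs/*.log", "/var/log/zookeeper/*.log"]

# Messages that commonly accompany a ZK-driven NameNode failover.
PATTERN = re.compile(
    r"SocketTimeoutException|Session expired|Connection reset"
    r"|transitionToActive|transitionToStandby"
)

for pattern in LOG_GLOBS:
    for path in sorted(glob.glob(pattern)):
        with open(path, errors="replace") as log:
            for lineno, line in enumerate(log, 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.rstrip()}")
```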

Please revert

Explorer

Hello @Shelton 

 

Thank you for the elaborate answer. In my cluster we have 2 ZKFCs, 3 ZooKeepers, and 2 NameNodes.

I have been through the NN and ZKFC logs and found the errors below. Can we conclude anything from them?

NN error logs -

 

(attached screenshot: Metrics Collector.png)

 

ZKFC logs - 

 

(attached screenshot: Vinay1991_0-1627463070938.png)

 

Master Mentor

@Vinay1991 

From the logs, I see connectivity loss, and that's precisely what's causing the NN switch. Remember the importance of the ZK quorum from my earlier post!
Your NameNodes are losing their connection to ZooKeeper; the NN that loses its active connection causes ZK to elect a new active NameNode, and that keeps happening in a loop.

 

Caused by : java.net.SocketTimeoutException: 5000 millis timeout while waiting for channel to be ready for read

 

I would start by checking the firewall. I see you are on Ubuntu, so ensure the firewall is disabled across the cluster.

Identifying and Fixing Socket Timeouts

The root cause of a socket timeout is a connectivity failure between the machines, so try the usual process (a small script covering steps 2-7 follows the list):

  1. Check the settings: is this the machine you really wanted to talk to?
  2. From the machine that is raising the exception, can you resolve the hostname?
  3. Is that resolved hostname the correct one?
  4. Can you ping the remote host?
  5. Is the target machine running the relevant Hadoop processes?
  6. Can you telnet to the target host and port?
  7. Can you telnet to the target host and port from any other machine?
  8. On the target machine, can you telnet to the port using localhost as the hostname? If this works but external network connections time out, it's usually a firewall issue.
  9. If it is a remote object store: is the address correct? Does it go away when you repeat the operation? Does it only happen on bulk operations? If the latter, it's probably due to throttling at the far end.
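Here is a minimal sketch for steps 2-7: it resolves each host and then attempts a TCP connection with a short timeout. The host names and ports are placeholders; substitute your actual NameNode and ZooKeeper endpoints.

```python
import socket

# Placeholder endpoints - substitute your actual NameNode and ZooKeeper hosts.
TARGETS = [
    ("zk1.example.com", 2181),
    ("zk2.example.com", 2181),
    ("zk3.example.com", 2181),
    ("nn1.example.com", 8020),
    ("nn2.example.com", 8020),
]

for host, port in TARGETS:
    try:
        ip = socket.gethostbyname(host)          # steps 2-3: name resolution
    except socket.gaierror as err:
        print(f"{host}: DNS lookup failed ({err})")
        continue
    try:
        with socket.create_connection((host, port), timeout=5):   # step 6: reachability
            print(f"{host} ({ip}) port {port}: reachable")
    except OSError as err:
        print(f"{host} ({ip}) port {port}: NOT reachable ({err})")
```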

 

Check your hostname resolution: DNS or /etc/hosts should be consistent and in sync across hosts. Another important thing is that the time on all your hosts should be in sync.

Can you share the value of the core-site.xml parameter ha.zookeeper.quorum?
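If it's easier, you can read it straight out of the config file; a small sketch, assuming the usual /etc/hadoop/conf/core-site.xml location:

```python
import xml.etree.ElementTree as ET

# Assumed default HDP location - adjust if your client configs live elsewhere.
CORE_SITE = "/etc/hadoop/conf/core-site.xml"

root = ET.parse(CORE_SITE).getroot()
for prop in root.findall("property"):
    if prop.findtext("name") == "ha.zookeeper.quorum":
        print(prop.findtext("value"))
```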

Explorer

Hello @Shelton 

Thank you for the descriptive response.

Kindly find the value of the requested property below:

core-site.xml parameter ha.zookeeper.quorum:

uhn7ttob2qtzk001.prod.rmn.local:2181,uhn7ttob2qtzk002.prod.rmn.local:2181,uhn7ttob2qtzk003.prod.rmn.local:2181

Master Mentor

@Vinay1991 

The ZKs look okay. Please go through the connectivity list I shared and validate each item one by one.
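For the three ZK hosts you listed, a quick liveness check from each NameNode host is to send ZooKeeper's four-letter "ruok" command to port 2181; a healthy server answers "imok". A small sketch is below; note that on newer ZooKeeper releases the four-letter words may need to be enabled via 4lw.commands.whitelist.

```python
import socket

# The ZK hosts from your ha.zookeeper.quorum value.
ZK_HOSTS = [
    "uhn7ttob2qtzk001.prod.rmn.local",
    "uhn7ttob2qtzk002.prod.rmn.local",
    "uhn7ttob2qtzk003.prod.rmn.local",
]

for host in ZK_HOSTS:
    try:
        with socket.create_connection((host, 2181), timeout=5) as s:
            s.sendall(b"ruok")               # ZooKeeper four-letter command
            reply = s.recv(16).decode() or "<no reply>"
        print(f"{host}:2181 -> {reply}")     # a healthy server answers 'imok'
    except OSError as err:
        print(f"{host}:2181 -> unreachable ({err})")
```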

Master Mentor

@Vinay1991 
I mentioned the log locations earlier. You will definitely need the ZooKeeper and ZKFailoverController logs as well as the NameNode logs.