This is a Kerberos configuration issue, most likely with the principal for the second NameNode. When a checkpoint is attempted (copying the fsimage file from the Standby NameNode to the Active), the connection fails during GSSAPI authentication with the Kerberos credential.
The failover controller logs will probably contain similar messages.
Since the server is able to start, your basic Kerberos setup is working: the server obtains its initial credential, but that credential then appears to expire.
A few possible causes:
* The principal needs renewable tickets. In your output, renewable is set to false. The problem could be with the /etc/krb5.conf file on the Standby or with the principal in your KDC.
* Reverse DNS lookup for the hostname is not working. The packet sent from one server carries the information "my hostname is server2.example.com, IP: 10.1.2.3". The receiving server does a reverse DNS lookup for 10.1.2.3 and either gets no hostname back or gets one that does not match the hostname provided.
* You are having an intermittent outage of your KDC or DNS that causes the problems above.
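A quick way to check the first two bullets from the node itself: run `klist -f` and look for the `R` (renewable) flag, and run a reverse lookup on the node's own IP. The sketch below parses a hypothetical `klist -f` output (the principal, realm, and cache path are made up for illustration); on a real node you would pipe the live output instead.

```shell
#!/bin/sh
# Hypothetical `klist -f` output; on a real node run `klist -f` directly.
# A renewable ticket shows 'R' in its Flags field.
klist_output='Ticket cache: FILE:/tmp/krb5cc_hdfs
Default principal: hdfs/nn2.example.com@EXAMPLE.COM

Valid starting     Expires            Service principal
06/01/24 00:00:00  06/01/24 10:00:00  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 06/08/24 00:00:00, Flags: FRIA'

if printf '%s\n' "$klist_output" | grep -q 'Flags: .*R'; then
  echo "ticket is renewable"
else
  echo "ticket is NOT renewable"
fi

# Reverse DNS sanity check (run on the real host; needs working DNS):
#   getent hosts 10.1.2.3     # should print server2.example.com
```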
Depending upon the type of KDC in use and how it is configured, there may be additional issues. Since you report the rest of the cluster is functional (no loss of DataNodes), this is most likely isolated to the one NameNode's principal.
I found the following:
principal of the NameNode: hdfs/xyz.munich.com@ABC.com
hostname: xyz.paris.com
hostname --fqdn: xyz.munich.com
From the three values above, you can see that the hostname matches neither the principal's host part nor the FQDN.
But as far as I know, only the FQDN matters.
Still, do you think an incorrect hostname can cause this issue?
Please find part of my krb5.conf below:
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
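One thing worth noting: your krb5.conf requests `renew_lifetime = 7d`, but the KDC only issues renewable tickets if the principal (and the realm's krbtgt principal) have a nonzero maximum renewable life. With an MIT KDC that can be inspected and fixed in kadmin; the commands below use the realm and principal from your question (a sketch, not a verified fix for your KDC):

```
kadmin: getprinc hdfs/xyz.munich.com@ABC.com
        # check the "Maximum renewable life" line
kadmin: modprinc -maxrenewlife "7 days" krbtgt/ABC.com@ABC.com
kadmin: modprinc -maxrenewlife "7 days" hdfs/xyz.munich.com@ABC.com
```

After that, a fresh `kinit` should yield a ticket with the renewable flag, and `kinit -R` should be able to renew it.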
In my case, I had reinstalled HDFS in CDH.
On the SecondaryNameNode (SNN) machine, /hadoop/dfs/snn/current/fsimage_* differed from the NameNode's /hadoop/dfs/nn/current/fsimage_*.
Deleting /hadoop/dfs/snn on the SNN machine and then restarting the SNN fixed it.
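The recovery above can be sketched as follows. This version works on a throwaway directory so it can run anywhere; on the real machine the path is /hadoop/dfs/snn (the fsimage filename here is invented), and the SNN must be stopped before the delete and started again afterwards:

```shell
#!/bin/sh
# Sketch: clear a stale SecondaryNameNode checkpoint directory.
# Uses a temp dir instead of the real /hadoop/dfs/snn so it is safe to run.
snn_dir="$(mktemp -d)/snn"
mkdir -p "$snn_dir/current"
: > "$snn_dir/current/fsimage_0000000000000000042"   # stand-in stale image

# 1. stop the SNN (on a real CDH host: via Cloudera Manager or its daemon script)
# 2. remove the stale checkpoint directory:
rm -rf "$snn_dir"
# 3. restart the SNN; it rebuilds its checkpoint from the NameNode

[ ! -e "$snn_dir" ] && echo "stale checkpoint removed"
```

The directory is safe to delete because it only holds checkpoint copies; the authoritative fsimage and edits live under the NameNode's own dfs.namenode.name.dir.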