Unable to restart standby NameNode

Expert Contributor

Both NameNodes (Active & Standby) crashed. I restarted the Active NameNode and it is serving requests, but we are unable to restart the standby NameNode. I tried to restart it manually, but it still fails. How do I recover and restart the standby NameNode?

Version: HDP 2.2

2016-05-20 18:53:57,954 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream 'http://usw2stdpma01.glassdoor.local:8480/getJournal?jid=dfs-nameservices&segmentTxId=14726901&storageInfo=-60%3A761966699%3A0%3ACID-d16e0895-7c12-404e-9223-952d1b19ace0' to transaction ID 13013207
2016-05-20 18:53:58,216 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(750)) - Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log. We expected txid 13013207, but got txid 14726901.
at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:212)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:140)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:829)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:684)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1032)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)



2016-05-20 18:53:58,322 FATAL namenode.NameNode (NameNode.java:main(1512)) - Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 13013207, but got txid 14726901.
at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:212)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:140)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:829)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:684)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1032)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:538)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:597)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:764)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:748)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
2016-05-20 18:53:58,324 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-05-20 18:53:58,325 INFO namenode.NameNode (StringUtils.java:run(659)) - SHUTDOWN_MSG
1 ACCEPTED SOLUTION

Master Guru

@Anandha L Ranganathan

Please run the commands below as the root user.

1. Put Active NN in safemode

sudo -u hdfs hdfs dfsadmin -safemode enter

2. Do a saveNamespace operation on the Active NN

sudo -u hdfs hdfs dfsadmin -saveNamespace

3. Leave Safemode

sudo -u hdfs hdfs dfsadmin -safemode leave

4. Log in to the Standby NN

5. Run the command below on the Standby NameNode to pull in the latest fsimage that we saved in the steps above.

sudo -u hdfs hdfs namenode -bootstrapStandby -force
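
For reference, the same sequence as one consolidated sketch; the comments note which host each command runs on, and restarting the standby NameNode afterwards (for example from Ambari) is assumed as the final step:

# On the Active NN host: freeze the namespace and write a fresh fsimage
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs dfsadmin -saveNamespace
sudo -u hdfs hdfs dfsadmin -safemode leave

# On the Standby NN host: re-seed its metadata from the fsimage saved above,
# then start the Standby NameNode again (for example via Ambari)
sudo -u hdfs hdfs namenode -bootstrapStandby -force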


10 REPLIES

Guru

If this is a production cluster and you are on support, I suggest opening a support ticket since any tweaks can lead to data loss.

Before you move further, please take a backup of the NN metadata and of the edits from the journal nodes.
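
For example, a minimal backup sketch, assuming the common HDP default locations; confirm dfs.namenode.name.dir and dfs.journalnode.edits.dir in hdfs-site.xml before running:

# Archive the NameNode metadata (fsimage + edits); path is an assumption
tar czf /tmp/nn-metadata-backup-$(date +%F).tar.gz /hadoop/hdfs/namenode
# Archive the JournalNode edits directory; path is an assumption
tar czf /tmp/jn-edits-backup-$(date +%F).tar.gz /hadoop/hdfs/journal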


Anandha L Ranganathan

The standby NameNode and journal node configurations were in a corrupted state, so when the cluster tried to switch to the standby, you encountered the error that you reported.

Initially we had to put the primary NameNode into safe mode and save the namespace with the following commands:

hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace

Then, on the standby NameNode, we bootstrapped it from the freshly saved fsimage:

su - hdfs -c "hdfs namenode -bootstrapStandby -force"

This was to make sure that the NameNode was in a consistent state before we attempted to restart the HDFS components one last time, to confirm that all processes started cleanly and that HDFS would automatically leave safe mode.
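
A quick sketch for confirming that state afterwards (nn1 and nn2 stand in for the dfs.ha.namenodes service IDs defined in hdfs-site.xml, so adjust them for your cluster):

# Safe mode should report OFF once HDFS has left it automatically
sudo -u hdfs hdfs dfsadmin -safemode get
# One NameNode should report "active" and the other "standby"
sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2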


I have faced the same issue and used the same steps to get the standby NameNode up, and it worked. If you have any questions about following the steps above, please let me know.

Master Guru

@Anandha L Ranganathan

Please run the commands below as the root user.

1. Put Active NN in safemode

sudo -u hdfs hdfs dfsadmin -safemode enter

2. Do a saveNamespace operation on the Active NN

sudo -u hdfs hdfs dfsadmin -saveNamespace

3. Leave Safemode

sudo -u hdfs hdfs dfsadmin -safemode leave

4. Log in to the Standby NN

5. Run the command below on the Standby NameNode to pull in the latest fsimage that we saved in the steps above.

sudo -u hdfs hdfs namenode -bootstrapStandby -force

Expert Contributor

Thanks, it worked. It was on our dev cluster, and we got into this problem while upgrading to HDP 2.4 due to a manual error.

Master Guru

@Anandha L Ranganathan - Glad to hear that! 🙂

Contributor

Thanks @Kuldeep Kulkarni, it worked for us as well.

Contributor

The only trick here is that if the failed namenode is offline (which it is, because, well, it's failed), the first 3 commands in the answer will fail because the hdfs shell can't talk to the failed namenode. My workaround was:

  1. Edit /etc/hosts on the working namenode to add the failed namenode hostname on the same line which defines the working node. E.g., 192.168.1.27 workingnode.domain.com workingnode => 192.168.1.27 workingnode.domain.com workingnode failednode.domain.com failednode
  2. Issue the first 3 commands listed in the answer.
  3. Undo the changes to /etc/hosts.
  4. Issue the 4th and 5th commands listed in the answer.

Is there a better way? Is there a way to force the working active namenode into safe mode even if the secondary is offline?
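
One possible alternative, not something tried in this thread, so treat it as a sketch: dfsadmin accepts the generic -fs option, which should let you point the first three commands at the Active NameNode's RPC address directly instead of at the HA nameservice, avoiding the /etc/hosts edit. The host name and port 8020 below are assumptions:

sudo -u hdfs hdfs dfsadmin -fs hdfs://workingnode.domain.com:8020 -safemode enter
sudo -u hdfs hdfs dfsadmin -fs hdfs://workingnode.domain.com:8020 -saveNamespace
sudo -u hdfs hdfs dfsadmin -fs hdfs://workingnode.domain.com:8020 -safemode leave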


Hi @Jeff Arnold,

I tried to start the failed NameNode on the standby host with the above steps. I faced an error when running the command "sudo -u hdfs hdfs namenode -bootstrapStandby -force":

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008c800000, 1937768448, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1937768448 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /var/log/hadoop/hdfs/hs_err_pid5144.log

Before executing the steps that you provided, I was facing this error while restarting the NameNode on the standby host via Ambari:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 118, in namenode
    raise Fail("Could not bootstrap standby namenode")
resource_management.core.exceptions.Fail: Could not bootstrap standby namenode
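
The os::commit_memory error above usually just means the standby host did not have the roughly 1.8 GB of free memory the JVM asked for when bootstrapStandby started. A quick check with standard Linux tools before retrying the bootstrap (the NameNode heap size itself is configured in hadoop-env.sh / Ambari):

# How much memory is actually free on the standby host?
free -m
# Which processes are holding the most memory right now?
ps aux --sort=-rss | head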