During an HDFS rolling restart, the Standby NameNode (SBNN) failed to load the FsImage, which caused the SBNN to crash and interrupted the rolling restart (that is actually good news). At this stage, the SBNN is down (if it is shown as “started” in CM, you can manually stop it via the CM Web UI), but the Active NameNode (ANN) is still active and serves the HDFS service properly. This means the cluster remains available while the SBNN is repaired.
On the SBNN logs, the following stack trace can be observed:
2020-02-10 13:51:56,845 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/data/nn2/dfs/nn/current/fsimage_0000000007432739660, cpktTxId=0000000007432739660)
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addChild(INodeDirectory.java:536)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:274)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:211)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:184)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
...
2020-02-10 13:51:57,018 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
Although the stack trace only shows a NullPointerException, this usually means that the FsImage is corrupted and the SBNN is not able to parse it correctly.
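Before rebuilding anything, it can be worth confirming that the FsImage file itself is unreadable. A minimal sketch, assuming the path from the stack trace above and a host with the HDFS client installed, is to run the file through the Offline Image Viewer:

# Try to parse the suspect FsImage into XML; an exception here strongly suggests the image is corrupted
hdfs oiv -p XML -i /data/nn2/dfs/nn/current/fsimage_0000000007432739660 -o /tmp/fsimage_check.xml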
To resolve this issue, manually “bootstrap” the SBNN based on the ANN. To do that, copy the content of the ANN dfs.namenode.name.dir and paste it into the SBNN dfs.namenode.name.dir.
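If you are not sure which directories are configured, the value can be checked in the HDFS service configuration in CM; as a sketch, assuming the NameNode configuration is available to the client on that host, it can also be read with hdfs getconf:

# Print the configured NameNode metadata directories (may return the default if run on a host without the NN config)
hdfs getconf -confKey dfs.namenode.name.dir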
The detailed steps are as follows. In this example, the NameNodes are configured with:

dfs.namenode.name.dir = /data/nn1/dfs/nn,/data/nn2/dfs/nn

# On the ANN, put HDFS in safe mode and save the namespace so the on-disk FsImage is up to date
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode enter
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -saveNamespace

# On the SBNN, back up the existing name directories, then remove them
cp -pr /data/nn1/dfs/nn /data/nn1/dfs/nn_bak_<date_time>
cp -pr /data/nn2/dfs/nn /data/nn2/dfs/nn_bak_<date_time>
rm -rf /data/nn1/dfs/nn
rm -rf /data/nn2/dfs/nn

# On the ANN, copy its name directories over to the SBNN
scp -pr /data/nn1/dfs/nn SBNN:/data/nn1/dfs
scp -pr /data/nn2/dfs/nn SBNN:/data/nn2/dfs

# On the SBNN, remove the in_use.lock files that were copied over from the ANN
rm -f /data/nn1/dfs/nn/in_use.lock
rm -f /data/nn2/dfs/nn/in_use.lock

# On the ANN, leave safe mode
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode leave

Once safe mode is left, start the SBNN from the CM Web UI.
If the SBNN still fails to start with the same stack trace as above (failed to load the FsImage), the FsImage of the ANN is also corrupted and needs to be fixed before repeating the above steps.
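Once the SBNN starts cleanly, the HA state of both NameNodes and the safe mode status can be double-checked from the command line. The service IDs nn1 and nn2 below are placeholders for the values defined in dfs.ha.namenodes.<nameservice>:

# One NameNode should report "active" and the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# HDFS should no longer be in safe mode
hdfs dfsadmin -safemode get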
When this issue occurs during a rolling restart (i.e., only one of the NameNodes is down), it is possible to solve it with minimal downtime (only the safe mode step will disturb running applications).
The fact that the FsImage was preserved on the ANN allowed us to “bootstrap” it to the SBNN and let it replay the subsequent edits present in the dfs.namenode.name.dir, bringing it up to date and letting it resume its actual job: performing checkpoints.
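As a quick sanity check that checkpointing has resumed, you can watch for new fsimage_<txid> files appearing in the SBNN name directories (paths reused from the example above); increasing transaction IDs in the file names indicate fresh checkpoints:

# On the SBNN, list the most recent FsImage files
ls -lt /data/nn1/dfs/nn/current/fsimage_* | head -5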