02-11-2020 08:54 AM
Problem Description (The Crash):
During an HDFS rolling restart, the Standby NameNode (SBNN) failed to load the FsImage, which caused the SBNN to crash and interrupted the rolling restart (that is actually good news). At this stage, the SBNN is down (if it still shows as “started” in CM, you can manually stop it via the CM web UI), but the Active NameNode (ANN) is still active and serving HDFS properly.
This means:
The service is still up and clients can issue read/write requests over HDFS.
Do NOT, by any means, try to restart the ANN.
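Before touching anything, it can help to confirm which NameNode is actually active. A minimal sketch, assuming the NameNode service IDs are nn1 and nn2 (placeholders; use the IDs defined in your dfs.ha.namenodes.<nameservice> configuration):
hdfs haadmin -getServiceState nn1    # expected: active
hdfs haadmin -getServiceState nn2    # expected: standby, or a connection error while the SBNN is down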
In the SBNN logs, the following stack trace can be observed:
2020-02-10 13:51:56,845 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/data/nn2/dfs/nn/current/fsimage_0000000007432739660, cpktTxId=0000000007432739660)
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addChild(INodeDirectory.java:536)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:274)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:211)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:184)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
...
2020-02-10 13:51:57,018 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
Root Cause
Although the stack trace shows a NullPointerException, this usually means that the FsImage is corrupted and the SBNN cannot parse it correctly.
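If you want to confirm the corruption before rebuilding, one hedged check is to try parsing the suspect FsImage with the Offline Image Viewer. The input file below is the one named in the stack trace and the output path is just an example; a corrupted image will typically fail to parse:
# run on the SBNN host
hdfs oiv -p XML -i /data/nn2/dfs/nn/current/fsimage_0000000007432739660 -o /tmp/fsimage_check.xml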
Solution
To resolve this issue, manually “bootstrap” the SBNN from the ANN. To do that, copy the contents of the ANN's dfs.namenode.name.dir to the SBNN's dfs.namenode.name.dir.
The detailed steps follow; a consolidated shell sketch of the same procedure is shown after the steps. If these steps do not work, it means that the ANN FsImage is also corrupted and needs to be repaired first (see the note after the steps).
1. Put the ANN in safemode to prevent any writes to the filesystem:
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode enter
2. Save the namespace of the ANN to create a new FsImage:
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -saveNamespace
3. Check the value of dfs.namenode.name.dir for the ANN and the SBNN (in this example: dfs.namenode.name.dir = /data/nn1/dfs/nn,/data/nn2/dfs/nn).
4. Take a backup of the contents of the SBNN dfs.namenode.name.dir:
cp -pr /data/nn1/dfs/nn /data/nn1/dfs/nn_bak_<date_time>
cp -pr /data/nn2/dfs/nn /data/nn2/dfs/nn_bak_<date_time>
5. Carefully clear the contents of the SBNN dfs.namenode.name.dir:
rm -rf /data/nn1/dfs/nn
rm -rf /data/nn2/dfs/nn
6. From the ANN, scp the contents of the ANN dfs.namenode.name.dir to the SBNN:
scp -pr /data/nn1/dfs/nn SBNN:/data/nn1/dfs
scp -pr /data/nn2/dfs/nn SBNN:/data/nn2/dfs
7. On the SBNN, delete the lock file in each dfs.namenode.name.dir:
rm -f /data/nn1/dfs/nn/in_use.lock
rm -f /data/nn2/dfs/nn/in_use.lock
8. Start the SBNN from the CM UI.
9. Once the SBNN is starting and the FsImage has been loaded, leave safemode on the ANN:
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode leave
10. The SBNN should start normally (if the ZKFC is not started, start it so that both NameNodes are notified that the SBNN is back on track).
11. Complete the HDFS rolling restart.
If the SBNN fails to start with the same stack trace as above (failed to load the FsImage), the FsImage of the ANN is also corrupted and needs to be fixed before the steps above can succeed.
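The following is a minimal sketch of the same procedure as a shell session run from the ANN host. It assumes the example paths above, an SBNN hostname of SBNN, and passwordless SSH as the HDFS superuser (all assumptions, not part of the original steps); adapt FQDNs, ports, and paths to your cluster before running anything:
# steps 1-2, on the ANN: freeze writes and write a fresh FsImage
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode enter
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -saveNamespace
# steps 4-5, on the SBNN: back up, then clear, each dfs.namenode.name.dir
ssh SBNN 'cp -pr /data/nn1/dfs/nn /data/nn1/dfs/nn_bak_$(date +%Y%m%d_%H%M%S)'
ssh SBNN 'cp -pr /data/nn2/dfs/nn /data/nn2/dfs/nn_bak_$(date +%Y%m%d_%H%M%S)'
ssh SBNN 'rm -rf /data/nn1/dfs/nn /data/nn2/dfs/nn'
# step 6, from the ANN: copy the healthy metadata to the SBNN
scp -pr /data/nn1/dfs/nn SBNN:/data/nn1/dfs
scp -pr /data/nn2/dfs/nn SBNN:/data/nn2/dfs
# step 7, on the SBNN: remove the stale lock files
ssh SBNN 'rm -f /data/nn1/dfs/nn/in_use.lock /data/nn2/dfs/nn/in_use.lock'
# step 8: start the SBNN from the CM UI, then (step 9) exit safemode on the ANN
hdfs dfsadmin -fs hdfs://<ANN_FQDN>:<ANN_PORT> -safemode leave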
Conclusion
When this issue occurs during a rolling restart (i.e., only one of the NameNodes is down), it is possible to solve it with minimal downtime (only the safemode period will disturb running applications).
The fact that the FsImage was preserved on the ANN allowed us to “bootstrap” it to the SBNN and let the SBNN replay the subsequent edits present in dfs.namenode.name.dir, bringing it up to date and letting it resume its actual job: performing checkpoints.
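As a quick sanity check (an assumption about how to verify this, not part of the original article), you can watch the SBNN's dfs.namenode.name.dir and confirm that new fsimage_<txid> files keep appearing with increasing transaction IDs once checkpointing resumes:
# on the SBNN; the newest fsimage_<txid> should advance over time
ls -lt /data/nn1/dfs/nn/current | grep fsimage | head -5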
12-06-2019 12:09 PM
The following always worked for me:
kinit -kt hdfs.keytab hdfs
hadoop fs -mkdir /benchmarks
hadoop fs -chmod 0777 /benchmarks
You can always lock down the directory permissions to only allow a certain group to write to this directory.
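For example, a minimal sketch of locking the directory down to one group (the group name benchmark-users is just an illustration, not something from the original post):
hadoop fs -chgrp benchmark-users /benchmarks
hadoop fs -chmod 0775 /benchmarks    # owner and group can write, everyone else is read-only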