Thanks for coming to the community with your question. I'm Josh.
We have a pretty extensive engineering blog post outlining namenode recovery tools, one of which is namenode recovery mode.
A few code snippets stand out. One is the command used to start a namenode in recovery mode:
./bin/hadoop namenode -recover
And the other is the warning text you're greeted with when running the command above:
You have selected Metadata Recovery mode. This mode is intended to recover
lost metadata on a corrupt filesystem. Metadata recovery mode often
permanently deletes data from your HDFS filesystem. Please back up your edit
log and fsimage before trying this!
In short, namenode recovery mode scans the edit log for errors and asks you how you'd like to handle each one it finds.
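That scan-and-prompt loop can be sketched like this. To be clear, this is a toy model of the pattern, not Hadoop's actual implementation; the entry format and the "skip"/"stop" choices are invented here for illustration.

```python
def recover_edits(entries, choose):
    """Replay a list of edit-log entries, consulting choose(entry)
    whenever an entry is marked corrupt. choose() returns:
      'skip' - discard the bad entry and keep going (data loss!)
      'stop' - stop replaying here, keeping what was applied so far
    """
    applied = []
    for entry in entries:
        if entry.get("corrupt"):
            if choose(entry) == "stop":
                break
            continue  # 'skip': the operation is permanently dropped
        applied.append(entry["op"])
    return applied

# A tiny fake edit log with one corrupt record in the middle.
log = [
    {"op": "mkdir /a"},
    {"op": "create /a/f", "corrupt": True},
    {"op": "mkdir /b"},
]

print(recover_edits(log, lambda e: "skip"))  # ['mkdir /a', 'mkdir /b']
print(recover_edits(log, lambda e: "stop"))  # ['mkdir /a']
```

This is also why the startup banner tells you to back up the edit log and fsimage first: choosing to skip a corrupt record throws that operation away for good.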
There is one thing I'm curious about. Are you asking this out of curiosity, or do you have an HDFS problem you're trying to solve? Please let me know if you have any other questions or if you'd like further assistance.
I'm glad to be of service. Let me know if there's anything else namenode related that you're curious about. :)
I can't speak for other vendors, but I don't think we offer certifications for single components. Generally, working with Hadoop effectively requires familiarity with multiple components, so it pays to learn a few pieces of the stack. The closest certifications I can think of are the data engineer and data analyst certifications. There's a link below to our full list of certifications for reference. I hope this helps.
I'm confused, since when I run this command I get:
18/10/12 16:14:07 WARN common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist
18/10/12 16:14:07 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.