Member since: 11-18-2024 · 5 Posts · 3 Kudos Received · 0 Solutions
11-21-2024 12:02 AM
1 Kudo
Thank you @rki_! That is exactly what happened. I had a node whose /tmp/ folder still contained old JournalNode data. After cleaning it up and running initializeSharedEdits, I managed to start the cluster.

Note: I hit this exact exception on two slave nodes:

WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log. We expected txid 121994, but got txid 121998.

I ran hdfs namenode -recover on both slave nodes and was then able to start both NameNodes properly. The data is now replicated across all 3 nodes. Thank you so much for the help!
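For anyone hitting the same issue, here is a rough sketch of the commands involved. The JournalNode edits path and the daemon-control commands are assumptions (check dfs.journalnode.edits.dir in hdfs-site.xml and your Hadoop version); only initializeSharedEdits and -recover are taken from the steps above.

# 1. On the node with stale JournalNode data: stop the JournalNode and clear the old edits directory.
#    (hdfs --daemon works on Hadoop 3.x; older releases use hadoop-daemon.sh stop journalnode)
hdfs --daemon stop journalnode
rm -rf /tmp/hadoop/dfs/journalnode/*   # hypothetical path, verify dfs.journalnode.edits.dir first
hdfs --daemon start journalnode

# 2. Re-initialize the shared edits directory from the active NameNode's metadata.
hdfs namenode -initializeSharedEdits

# 3. On each NameNode failing with "There appears to be a gap in the edit log",
#    run the recovery tool and follow the prompts, then start the NameNode again.
hdfs namenode -recover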