
HDFS too many bad blocks due to "Operation category WRITE is not supported in state standby" - Understanding why DataNode can't find the Active NameNode

New Contributor

Recently I tried to upgrade our cluster from 2.6.5 to 3.1.3, but the upgrade failed, so I rolled back to the old version. Since then some strange things have happened: our cluster's DataNodes can't report their block status to the Active NameNode, and every DataNode keeps throwing this exception:

[Screenshot of the DataNode log showing the exception "Operation category WRITE is not supported in state standby"]

I don't know why. The DataNodes throw this exception all the time, and the NameNode Web UI shows "There are xxx missing blocks. The following files may be corrupted", but the number of missing blocks keeps rising. Really scary.

I don't know what happened to our cluster.
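
Some context: we run NameNode HA (one active, one standby). To double-check which NameNode the DataNodes should be reporting to, I believe I can run something like the commands below (nn1 and nn2 are just placeholders for the NameNode IDs from dfs.ha.namenodes.<nameservice> in hdfs-site.xml; ours may differ):

# hdfs haadmin -getServiceState nn1
# hdfs haadmin -getServiceState nn2
# hdfs dfsadmin -report

Each -getServiceState call should print "active" or "standby", and the dfsadmin report should show how many DataNodes are currently registered with the active NameNode.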

2 REPLIES

Community Manager

@datafiber Welcome to our community! To help you get the best possible answer, I have tagged in our HDFS experts @SVB @Asok @rki_ who may be able to assist you further.

Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.



Regards,

Vidya Sargur,
Community Manager



Super Collaborator

Hi @datafiber, it seems like your NameNode is in safe mode. I'm not sure why it went into safe mode, but you can try taking it out manually, then retry the operation and monitor the logs.

Run the commands below on the NameNode host:

# hdfs dfsadmin -safemode leave
# hdfs dfsadmin -safemode get
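
Once safe mode reports OFF (and assuming the NameNode you are on is really the active one - you can verify with hdfs haadmin -getServiceState and your NameNode ID), you could also check which files the missing blocks belong to and whether all DataNodes have re-registered, for example:

# hdfs fsck / -list-corruptfileblocks
# hdfs dfsadmin -report

The fsck output lists the paths with missing or corrupt blocks, and the report shows the live DataNodes and their last contact time, which should make it clearer whether the DataNodes are reaching the active NameNode after the rollback.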