Cloudera HDFS canary test fails

Explorer

Hello, I'm trying to install CDH 5 using Cloudera Manager. After the installation I started HDFS, but there is an error that says "HDFS canary tests fail". I looked into the log file, and the error messages are:

PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2014_12_22-14_15_06. Name node is in safe mode.
The reported blocks 28 needs additional 363 blocks to reach the threshold 0.9990 of total blocks 391.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.

and

com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary@3950d2fe for hdfs://25-219.priv29.nus.edu.sg:8020: Failed to create /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2014_12_22-14_15_06  Details: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create file/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2014_12_22-14_15_06. Name node is in safe mode.
The reported blocks 28 needs additional 363 blocks to reach the threshold 0.9990 of total blocks 391.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1323)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2507)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2397)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:550)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

I'm using single-node mode (only one IP). Is it because the NameNode is in safe mode? What is the cause of this, and how can I fix it?
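From the log above it looks like the NameNode is holding itself in safe mode because only 28 of 391 blocks have been reported by the DataNode, and with a threshold of 0.9990 essentially all 391 blocks must be reported before safe mode lifts automatically. I believe the state can be checked with "hdfs dfsadmin -safemode get" (and forced off with "hdfs dfsadmin -safemode leave", though that seems unwise while blocks are still missing). Below is a minimal Java sketch of the same check through the HDFS client API, assuming CDH 5 / Hadoop 2.x client jars on the classpath; the class name SafeModeCheck is just a placeholder and the fs.defaultFS value is copied from the error message above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SafeModeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode address taken from the canary error message above
        conf.set("fs.defaultFS", "hdfs://25-219.priv29.nus.edu.sg:8020");
        FileSystem fs = FileSystem.get(conf);
        try {
            if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // SAFEMODE_GET only queries the flag; SAFEMODE_LEAVE would force the
                // NameNode out of safe mode even though blocks are still missing.
                boolean inSafeMode = dfs.setSafeMode(SafeModeAction.SAFEMODE_GET);
                System.out.println("NameNode in safe mode: " + inSafeMode);
            }
        } finally {
            fs.close();
        }
    }
}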

Thanks in advance.
