
No space left on device VS hdfs dfsadmin -report

Contributor

Hortonworks Sandbox 2.6, vmware.

I've added a /dev/sda4 partition with 61 GB (ext3), added the /usr01 folder to the DataNode directories in Ambari, and hdfs now reports 87.7 GB total:

[root@sandbox hdfs]# hdfs dfsadmin -report
Safe mode is ON
Configured Capacity: 94168273920 (87.70 GB)
Present Capacity: 79834880512 (74.35 GB)
DFS Remaining: 59022965248 (54.97 GB)
DFS Used: 20811915264 (19.38 GB)
DFS Used%: 26.07%
Under replicated blocks: 12
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

But when I try to upload a 2 GB file, this error is raised:

Cannot create file ... Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually.

and in name node logs:

[root@sandbox hdfs]# tail -n 20 hadoop-hdfs-namenode-sandbox.hortonworks.com.out
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:326)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
        at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:176)
        at org.apache.hadoop.ipc.Server.logException(Server.java:2428)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2362)
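The stack trace is telling: it is log4j's file appender that fails to flush, so it is the NameNode host's local filesystem that is full, not HDFS. A quick way to see which local mount is exhausted (the specific paths below are illustrative for a Sandbox layout, not taken from the post):

```shell
# Show usage of every mounted filesystem; a volume at 100% Use% is the
# one "No space left on device" refers to, regardless of HDFS capacity.
df -h

# Narrow it down to mounts the HDFS daemons typically write to
# (illustrative paths; adjust to your layout):
df -h /var/log /tmp 2>/dev/null
```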

1 ACCEPTED SOLUTION

Expert Contributor

@Triffids G

The dfsadmin report is not relevant here: the "No space left on device" error concerns the NameNode, not the DataNodes. Check "dfs.namenode.name.dir"; I'm pretty sure it points to a volume that is in fact full. Note that the property accepts comma-separated paths, so I'd suggest adding a directory on the newly added partition as well and restarting the NameNode.
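A sketch of how to confirm this, assuming a stock Sandbox layout (the metadata path below is illustrative; `hdfs getconf` prints the actual configured value):

```shell
# Print the configured NameNode metadata directory:
hdfs getconf -confKey dfs.namenode.name.dir
# e.g. /hadoop/hdfs/namenode (the actual value will vary)

# Check free space on the volume holding that directory:
df -h /hadoop/hdfs/namenode

# If it is full, add a second metadata directory on the new partition
# via Ambari (HDFS > Configs), e.g.:
#   dfs.namenode.name.dir=/hadoop/hdfs/namenode,/usr01/hdfs/namenode
# (/usr01/hdfs/namenode is a hypothetical path), then restart the NameNode.
```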


2 REPLIES


Contributor

Thanks for the answer, I found the reason. I used the web interface to upload the file, and it looks like the interface stages the upload in /tmp before moving it to HDFS. That is why I got the error: there was no space left on the Linux filesystem. The second problem was that I forgot to execute

hdfs dfsadmin -safemode leave
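Worth noting: leaving safe mode only sticks once the disk shortage is fixed, because the NameNode re-enters safe mode whenever free space on a metadata volume drops below dfs.namenode.resource.du.reserved (100 MB by default). A sketch of the sequence, after cleaning up the staging files in /tmp (the exact file names are not known from the post, so inspect before deleting):

```shell
# Reclaim the staging space the web upload consumed -- inspect first!
ls -lh /tmp
# rm -i /tmp/<leftover-upload-file>   # hypothetical file name

# Verify the state, then leave safe mode manually:
hdfs dfsadmin -safemode get     # should report "Safe mode is OFF" afterwards
hdfs dfsadmin -safemode leave
```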