
CDH installation: Failed to create HDFS directory /tmp.

Contributor

Hi, I am installing CDH with CM (Path C), but I can't create the HDFS directory /tmp successfully. The log shows:

2017-05-03 22:17:27,838 WARN org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space available on volume '/dev/disk/by-uuid/526841aa-13c2-4953-94ee-992b7f2fe6c9' is 0, which is below the configured reserved amount 104857600

2017-05-03 22:17:27,838 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on available disk space. Already in safe mode.

2017-05-03 22:17:27,838 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is ON
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.

+ /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop-hdfs/bin/hdfs --config /opt/cloudera-manager/cm-5.4.7/run/cloudera-scm-agent/process/60-hdfs-NAMENODE-createtmp dfs -mkdir -p /tmp
mkdir: Cannot create directory /tmp. Name node is in safe mode.
+ '[' 1 -eq 0 ']'
+ echo 'Unable to create directory /tmp; see stderr log.'
+ exit 1
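
For reference, the state the script trips over can be checked directly with the dfsadmin tool; a minimal check, assuming the hdfs binary is on the PATH (otherwise use the full parcel path from the trace above):

# Ask the NameNode whether it is currently in safe mode
hdfs dfsadmin -safemode get

# Leaving safe mode by hand will not help here: as the log itself warns,
# the NameNode re-enters safe mode immediately while the resource check fails.
hdfs dfsadmin -safemode leave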

 

But when I check with 'df -h', it shows I should have enough disk space:

lrwxrwxrwx 1 root root 10 Apr 22 09:35 /dev/disk/by-uuid/526841aa-13c2-4953-94ee-992b7f2fe6c9 -> ../../sda1

 

Filesystem Size Used Avail Use% Mounted on

udev 32G 4.0K 32G 1% /dev

tmpfs 6.3G 1.6M 6.3G 1% /run

/dev/sda1 212G 124G 78G 62% /

......
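
To tie the volume named in the log to a mount point, one can also resolve which local directories the NameNode resource checker actually monitors and check the free space there; a sketch, assuming the default CM layout where the metadata lives under /dfs/nn:

# Print the NameNode metadata directories the resource checker watches
hdfs getconf -confKey dfs.namenode.name.dir

# Free space on the filesystem backing that directory
df -h /dfs/nn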

 

BTW, I have re-installed several times.

Any suggestions would be appreciated.

1 ACCEPTED SOLUTION

Contributor

Got it fixed!

The directory '/dfs/nn' had been removed earlier, but '/dfs/dn' was still there! After I removed all of these directories, it worked!
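
In other words, stale HDFS directories from an earlier install attempt can leave the new NameNode and the old DataNode state out of sync. A minimal cleanup before re-running the wizard, assuming the default CM directory layout of /dfs/nn and /dfs/dn (substitute your own dfs.namenode.name.dir and dfs.datanode.data.dir if you changed them):

# Stop the HDFS roles in Cloudera Manager first, then on each affected host:
rm -rf /dfs/nn /dfs/dn
# Re-run the installation wizard so it can format the NameNode from scratch.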


3 REPLIES

Champion

Could you check your hdfs-site.xml for the parameter below?

dfs.namenode.resource.du.reserved

It looks like you have allocated only 100 MB; I can see that in your error log. Increase it accordingly and restart the NameNode; that should fix the problem.
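
For reference, the property goes in the NameNode's hdfs-site.xml (in CM, set it through the HDFS configuration rather than hand-editing the file); a sketch with an illustrative 1 GB value, not a recommendation:

<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <!-- Free space, in bytes, the NameNode requires on each metadata volume;
       the default is 104857600 (100 MB). 1073741824 = 1 GB. -->
  <value>1073741824</value>
</property>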

Champion

@BobinGZ

Go to the path it mentioned and run the command "du -sh", and make sure the space available is greater than the reserved amount of 104857600 bytes.
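
A quick way to put the two numbers side by side on the NameNode host; a sketch, assuming the metadata directory is /dfs/nn (substitute your own dfs.namenode.name.dir):

# The reserved threshold the resource checker enforces, in bytes
hdfs getconf -confKey dfs.namenode.resource.du.reserved

# Usage of the metadata directory, and free space on its filesystem
du -sh /dfs/nn
df -h /dfs/nn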
