
Issue with Log Directory Free Space: "This role's log directory is on a filesystem with less than 5.0 GiB of its space free"

Contributor

Hi 

I am facing a critical warning on the CDH Manager interface for the log directory:

 

This role's log directory is on a filesystem with less than 5.0 GiB of its space free. /var/log/hadoop-hdfs (free: 119.0 MiB (0.24%), capacity: 49.1 GiB). On my system I can see that the root directory is filled, but I do have space in the home directory.

 

I would like to know what property I need to change so the logs go to /home instead of the small root partition. I didn't find any link to fix this issue, so if you can point me to the right info it would be really helpful.

 

 

[root@hadoop-vm2 subdir0]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_hadoopvm2-lv_root
50G 47G 111M 100% /
tmpfs 15G 8.0K 15G 1% /dev/shm
/dev/sda1 477M 63M 389M 14% /boot
/dev/mapper/vg_hadoopvm2-lv_home
742G 55G 650G 8% /home
cm_processes 15G 5.3M 15G 1% /var/run/cloudera-scm-agent/process
[root@hadoop-vm2 subdir0]#

 

2 REPLIES

Visit any service's configuration page. In the search box, type in
"logging". This will show you all the locations where log files are
written for this service. Make the changes you wish to and restart the
roles. Ensure that every relevant host on the cluster has this new
directory created and the filesystem permissions are correct.
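
For example, assuming the new location is a directory under /home (the exact path below is only an illustration, use whatever you enter in the configuration), each relevant host could be prepared like this before restarting the roles:

# run on every host that carries the role; path is an example only
mkdir -p /home/hadoop-logs/hadoop-hdfs
chown -R hdfs:hdfs /home/hadoop-logs/hadoop-hdfs   # match the owner/group of the existing /var/log/hadoop-hdfs
chmod 755 /home/hadoop-logs/hadoop-hdfs
df -h /home                                        # confirm the target filesystem has the expected free space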

Regards,
Gautam Gopalakrishnan

Contributor

Hi Gautam

My issue, I think, is with the filled space. After troubleshooting I found that I was earlier using /dfs/dn for HDFS block storage; later I added a non-OS partition under /home (/home/hdfs/dfs/dn) and then started importing hundreds of GB of data. It looks like my old path /dfs/dn had also stored some of the HDFS blocks and filled the root partition. So if I now change the configuration to remove /dfs/dn from dfs.data.dir and restart the cluster, will it automatically move the data to the only remaining location, /home/hdfs/dfs/dn, or how should I handle that? I guess this will fix my problem for now.

 

Do not worry too much about the data; whatever is best and quick will be fine.
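
For reference, HDFS will not relocate blocks on its own when a directory is removed from dfs.data.dir; whatever is still sitting in /dfs/dn simply stops being served from that host. A common manual approach, sketched below on the assumption that both directories belong to the same block pool shown in the ls output further down (this is a sketch, not an official procedure, so verify the paths and ownership first), is to stop the DataNode role, copy the finalized block data across, and only then change the configuration:

# stop the DataNode role on this host in Cloudera Manager before touching any files
BP=BP-1505211549-172.28.172.30-1424252944658    # block pool id, taken from ls /dfs/dn/current/
mkdir -p /home/hdfs/dfs/dn/current/$BP/current/finalized
cp -a /dfs/dn/current/$BP/current/finalized/. /home/hdfs/dfs/dn/current/$BP/current/finalized/
chown -R hdfs:hdfs /home/hdfs/dfs/dn            # ownership should match the existing data directory
# remove /dfs/dn from dfs.data.dir, start the DataNode again, then check the blocks:
hdfs fsck /                                     # should report no missing or corrupt blocks
# only once fsck is clean, reclaim the root partition: rm -rf /dfs/dn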

 

 

[root@hadoop-vm2 /]# du -sh ./*
7.9M ./bin
61M ./boot
4.0K ./cgroup
196K ./dev
40G ./dfs
30M ./etc
55G ./home
12K ./impala
263M ./lib
27M ./lib64
16K ./lost+found
4.0K ./media
0 ./misc
4.0K ./mnt
0 ./net
3.5G ./opt
du: cannot access `./proc/20676/task/20676/fd/4': No such file or directory
du: cannot access `./proc/20676/task/20676/fdinfo/4': No such file or directory
du: cannot access `./proc/20676/fd/4': No such file or directory
du: cannot access `./proc/20676/fdinfo/4': No such file or directory
0 ./proc
92K ./root
15M ./sbin
4.0K ./selinux
4.0K ./srv
0 ./sys
1.2M ./tmp
2.9G ./usr
387M ./var
223M ./yarn
[root@hadoop-vm2 /]# ls /dfs/dn/current/
BP-1505211549-172.28.172.30-1424252944658 VERSION