
Releasing storage space from host

Expert Contributor

Hello,

I am working with a 3 node cluster with t2.large machines on AWS.

One of those hosts has reached 100% storage capacity.

Capacity Used: [100.00%, 10.7 GB], Capacity Total: [10.7 GB], path=/usr/hdp

What are the best practices to release some storage space from this host? Deleting unnecessary services?

Thanks-

Wellington

1 ACCEPTED SOLUTION


Hi @Wellington De Oliveira, in this case standard Linux practices apply. First of all, find out what is consuming all the space. I'm guessing you mean the OS root partition is full?

In which case it's probably logs that have filled up /var/log

Use a combination of:

df -h

and

du -h --max-depth=1 /path/of/interest

(where /path/of/interest is the directory whose space consumption you're investigating).

Of course if you are talking about HDFS being full on one node, that's something else, let me know if that's the case.
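The two commands above can be combined into a quick drill-down loop; a minimal sketch (the paths are only examples, start wherever df showed the full filesystem):

```shell
# Start at / and drill down into whichever directory is largest.
# sort -h orders human-readable sizes, so the biggest consumers end up last.
du -h --max-depth=1 / 2>/dev/null | sort -h | tail -n 5

# Then repeat one level deeper on the worst offender, e.g. if it was /var:
du -h --max-depth=1 /var 2>/dev/null | sort -h | tail -n 5
```

Repeating this a couple of levels down usually pinpoints the offending directory in under a minute.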


5 REPLIES


Expert Contributor

Thanks.

I did some investigation and here below is what I found out.

Are there standard practices for releasing space, like removing content that might be not so relevant? In the last command below I noticed that /var/log and /var/cache sum up to about 1.2 GB. Are these folders that I could empty without affecting services?

[root@ip-172-31-34-25 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G   10G  9.2M 100% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           3.7G   17M  3.7G   1% /run
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs           757M     0  757M   0% /run/user/1000

Then:

[root@ip-172-31-34-25 /]# du -h --max-depth=1 /
0       /dev
du: cannot access '/proc/2284/task/2284/fd/4': No such file or directory
du: cannot access '/proc/2284/task/2284/fdinfo/4': No such file or directory
du: cannot access '/proc/2284/fd/4': No such file or directory
du: cannot access '/proc/2284/fdinfo/4': No such file or directory
0       /proc
17M     /run
0       /sys
24M     /etc
199M    /root
2.4M    /tmp
2.8G    /var
4.9G    /usr
115M    /boot
75M     /home
0       /media
0       /mnt
9.8M    /opt
0       /srv
0       /data
0       /cgroups_test
1.9G    /hadoop
10G     /

and going into more detail in /var:

[root@ip-172-31-34-25 /]# du -h --max-depth=1 /var/
1.6G    /var/lib
1009M   /var/log
0       /var/adm
246M    /var/cache
8.0K    /var/db
0       /var/empty
0       /var/games
0       /var/gopher
0       /var/local
0       /var/nis
0       /var/opt
0       /var/preserve
28K     /var/spool
48K     /var/tmp
0       /var/yp
0       /var/kerberos
0       /var/crash
2.8G    /var/

Thanks!

Wellington


So I'd start by looking at what log files are consuming space in /var/log; removing some of the older ones that have rolled over should be pretty safe.
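A cautious way to do that cleanup is to list the candidates first and delete only once you've reviewed them; a minimal sketch, assuming the usual rolled-log naming (*.gz, *.1, *.2, ...), with the age threshold as an example:

```shell
# Preview: list compressed/rotated logs older than 7 days before touching anything
find /var/log -type f \( -name "*.gz" -o -name "*.[0-9]" \) -mtime +7 -print

# Once you're happy with the list, delete the same set:
# find /var/log -type f \( -name "*.gz" -o -name "*.[0-9]" \) -mtime +7 -delete

# For a log still held open by a running service, truncate instead of rm,
# otherwise the process keeps the deleted file's space allocated:
# : > /var/log/hypothetical-service/huge.log
```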

4.9GB in /usr seems a bit large too; maybe investigate what's consuming such a large percentage of your space in there as well.

As usual, remove any unneeded packages at the OS level.
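On the /var/cache question from the output above: on RHEL/CentOS most of that is typically the yum package cache, which is safe to clear (a hedged suggestion, not from the original reply):

```shell
# See how much the package-manager cache is holding
du -sh /var/cache/yum

# Safe to clear: yum re-downloads metadata and packages on demand
yum clean all
```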

10GB is honestly a bit small for a root partition; you might want to bump that up a bit, or at least spin up some extra storage to mount as /var and /usr to give yourself a bit more space.

Hadoop is very good at generating logs, so it's very easy to fill up a root partition if you're not careful and don't have it split off elsewhere.
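On the "bump it up" option: with EBS-backed instances the root volume can usually be enlarged in place. A hedged sketch only; the device and partition names are assumptions (confirm with lsblk), the volume id is hypothetical, and you should snapshot the volume first:

```shell
# 1. Enlarge the EBS volume in the AWS console, or via the CLI:
# aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 50   # hypothetical id

# 2. On the instance, confirm the device layout:
lsblk

# 3. Grow the partition to fill the resized volume (names assumed from lsblk):
# sudo growpart /dev/xvda 2

# 4. Grow the filesystem to fill the partition:
# sudo xfs_growfs /            # XFS (e.g. CentOS/RHEL 7)
# sudo resize2fs /dev/xvda2    # ext4
```

The growpart/xfs_growfs steps are online operations, so under this approach the cluster can keep running while the partition is extended.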

Expert Contributor

Thanks!

Do you happen to know an easy way to add additional storage to those partitions (it is hosted on AWS) without compromising my current installation (3-node cluster running on t2.large)?

Thanks-

Contributor

Hi,

To check whether a directory has adequate space during installation or upgrade procedures (for example, while doing an HDP upgrade you should verify that /usr/hdp has adequate space available for the target HDP version), use the following format:

df -h <Path_of_interest>

Example :

[alex@machine1]# df -h /usr/hdp/
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root  528G   22G  506G   5% /
[alex@machine1]#

This shows all the parameters: disk size, used space, available space, and usage percentage.