Support Questions

Find answers, ask questions, and share your expertise

HDFS disk usage is 100%


Our Ambari cluster's HDFS usage is at 100%.

I tried to capture the trash size with hadoop dfsadmin -report,

but the output does not show the trash size. How can I find out the size of the trash?

And which CLI commands show all used space on HDFS, including trash?

Second, in order to solve the HDFS 100% problem,

we are thinking of adding disks to the worker machines. Would that help in our case?

If yes, how do we rebalance the data onto the new disks?
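A few standard commands answer the space questions above. This is a sketch, not cluster-specific advice: the trash glob assumes the default per-user trash location, the hostname and plan-file name are placeholders, and the intra-node disk balancer exists only on Hadoop 3.x:

```shell
# Show total HDFS capacity, used, and remaining space
hdfs dfs -df -h /

# Trash lives per user under /user/<name>/.Trash; sum it for all users
# (the glob assumes the default trash location)
hdfs dfs -du -s -h '/user/*/.Trash'

# Per-directory usage at the root, to find the big consumers
hdfs dfs -du -h /

# After adding DataNodes (or to even out existing ones), rebalance
# blocks ACROSS nodes; threshold is the allowed deviation in percent
hdfs balancer -threshold 10

# Rebalancing across disks WITHIN one DataNode needs the disk
# balancer, available in Hadoop 3.x only; hostname is a placeholder,
# and -plan prints the actual path of the generated plan file
hdfs diskbalancer -plan worker01.example.com
hdfs diskbalancer -execute /system/diskbalancer/worker01.example.com.plan.json
```

Note that adding disks to an existing worker only helps new writes by default; already-stored blocks stay on the old, full disks until an intra-node rebalance moves them.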

Michael-Bronson
1 ACCEPTED SOLUTION

7 REPLIES



Can you please show me how to decommission a DataNode? What should be done before and after?

Michael-Bronson

Super Collaborator

In Ambari, go to the host details page; there you can click the button to the right of the 'DataNode HDFS' service line. (attachment: screenshot-decomission.png)

You should turn on maintenance mode beforehand to avoid alerts.
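For reference, the same decommission can be done from the command line (Ambari does this under the hood). A sketch, assuming the excludes file lives at the path below; it must match whatever dfs.hosts.exclude points to in your hdfs-site.xml:

```shell
# Add the DataNode's hostname to the excludes file referenced by
# dfs.hosts.exclude in hdfs-site.xml (this path is an assumption)
echo "worker01.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists; the node
# enters "Decommission in progress" while its blocks are re-replicated
hdfs dfsadmin -refreshNodes

# Watch progress; the node is safe to stop once it shows
# "Decommissioned" in the report output
hdfs dfsadmin -report
```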


From the output I see:

hdfs dfs -du -h /
398.2 M  /app-logs
7.5 M    /apps
3.5 M    /ats
695.6 M  /hdp
0        /mapred
0        /mr-history
53.1 G   /spark2-history
0        /tmp
516.3 M  /user

Is it possible to delete the Spark history from the CLI? If not, how can I delete it from the Ambari GUI?

Michael-Bronson

Super Collaborator

For the trash dir, also try executing the command without the trailing /.

Super Collaborator

To delete /spark2-history:

hdfs dfs -rm -r /spark2-history/*
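One caveat with the command above: with trash enabled, a plain -rm -r only moves the files into the user's trash, so no space is freed until the trash is expunged. If the goal is to reclaim space immediately, bypass or empty the trash (assuming default trash settings):

```shell
# Delete and bypass trash so the blocks are freed right away
hdfs dfs -rm -r -skipTrash '/spark2-history/*'

# Alternatively, if files were already deleted into trash, force-empty it
hdfs dfs -expunge
```

Longer term, enabling Spark's history cleaner (spark.history.fs.cleaner.enabled=true) keeps this directory from filling up again.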


Yes, the trash dir is small and isn't the problem.

Michael-Bronson