Created 02-09-2018 11:51 AM
Our Ambari cluster's HDFS is at 100% capacity.
I tried to capture the trash size with hadoop dfsadmin -report,
but the output does not show the trash size, so how can I find out the size of the trash?
And which CLI command reports all space used on HDFS, including trash?
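For reference, trash lives under each user's .Trash directory, so its size can be checked directly; a minimal sketch (the /user/hdfs path is an example, adjust to your actual users):

```shell
# Size of one user's trash (username is an example)
hdfs dfs -du -s -h /user/hdfs/.Trash

# Size of every user's trash directory at once
hdfs dfs -du -s -h /user/*/.Trash

# Overall filesystem usage: configured capacity, used, available
hdfs dfs -df -h /
```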
Second, to resolve the HDFS 100% issue, we are thinking of adding disks to the worker machines. Would that help in our case?
If yes, how do we rebalance the data onto the new disks?
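For reference, after new disks are added to dfs.datanode.data.dir and the DataNodes restarted, HDFS does not move existing blocks onto them automatically. The inter-node balancer can be run from the CLI; a sketch (the threshold value is an example):

```shell
# Spread blocks more evenly across DataNodes.
# threshold = allowed deviation from average utilization,
# in percent (10 here is an example value)
hdfs balancer -threshold 10
```

Note that hdfs balancer balances across nodes; spreading data across disks within a single DataNode needs the intra-node disk balancer (hdfs diskbalancer), which is only available on newer Hadoop versions (3.0+).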
Created 02-09-2018 02:22 PM
Created 02-09-2018 02:40 PM
Can you please show me how to decommission a node? What should be done before and after?
Created 02-09-2018 02:58 PM
In Ambari, go to the host details page; there you can click the button to the right of the 'DataNode HDFS' service line and choose Decommission (see attached screenshot-decomission.png).
You should turn on maintenance mode beforehand to avoid alerts.
Created 02-09-2018 02:44 PM
From the output I see:

hdfs dfs -du -h /
398.2 M  /app-logs
7.5 M    /apps
3.5 M    /ats
695.6 M  /hdp
0        /mapred
0        /mr-history
53.1 G   /spark2-history
0        /tmp
516.3 M  /user

Is it possible to delete the Spark history from the CLI? If not, how can I delete it from the Ambari GUI?
Created 02-09-2018 02:54 PM
For the trash dir, also try executing the command without the trailing /.
Created 02-09-2018 03:01 PM
To delete /spark2-history:
hdfs dfs -rm -r /spark2-history/*
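If HDFS trash is enabled, a plain -rm only moves the files into .Trash, so the space is not freed immediately; a sketch of freeing it right away (assuming the history files are no longer needed):

```shell
# Delete permanently, bypassing the .Trash directory
hdfs dfs -rm -r -skipTrash /spark2-history/*

# Or, after a plain -rm, force-empty checkpointed trash
hdfs dfs -expunge
```

The Spark History Server will fill the directory again over time; retention can also be capped via its cleaner settings (e.g. spark.history.fs.cleaner.enabled) so this does not recur.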
Created 02-09-2018 02:57 PM
Yes, the trash dir is small and isn't the problem.