Initially, when the environment was built, about 327.33 GB of the 1 TB total disk capacity was already in use, so the HDFS dfsadmin report showed Non DFS Used as 327.33 GB.
But after cleaning up 300 GB of data from the local filesystem, the dfsadmin report still shows Non DFS Used as 327.33 GB, while the reserved disk space is 10 GB.
How can I get the Non-DFS utilisation refreshed after cleaning up local files on the Linux filesystem?
hdfs dfsadmin -report
Name: 18.104.22.168:2004 (hpc123.xyz.com)
Hostname: hpc123.xyz.com
Rack: /Row7/Rack2
Decommission Status : Normal
Configured Capacity: 1154570731520 (1.05 TB)
DFS Used: 449671168 (428.84 MB)
Non DFS Used: 351465279488 (327.33 GB)
DFS Remaining: 802655780864 (747.53 GB)
DFS Used%: 0.04%
DFS Remaining%: 69.52%
Configured Cache Capacity: 4294967296 (4 GB)
Cache Used: 0 (0 B)
Cache Remaining: 4294967296 (4 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Thu Apr 12 08:18:16 PDT 2018
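For context, Non DFS Used is not a value the DataNode measures directly: it is derived from the other figures in the report as Configured Capacity minus DFS Used minus DFS Remaining. So a stale Non DFS Used number usually means DFS Remaining has not been re-polled yet. This sketch checks the arithmetic against the numbers in the report above (variable names are my own, not Hadoop's):

```python
# Values taken from the dfsadmin report above, in bytes.
configured_capacity = 1154570731520  # 1.05 TB
dfs_used = 449671168                 # 428.84 MB
dfs_remaining = 802655780864         # 747.53 GB

# Non DFS Used is derived from the other three figures:
non_dfs_used = configured_capacity - dfs_used - dfs_remaining

print(non_dfs_used)                      # 351465279488 bytes
print(round(non_dfs_used / 1024**3, 2))  # 327.33 GB, matching the report
```

Since the value is derived, it only changes when the DataNode refreshes its local disk-usage figures, which happens on an interval rather than immediately after files are deleted.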
@Johnny_Bach Try deleting logs from your local nodes and see if that frees some space; check the directories pointed to by the yarn.nodemanager.local-dirs parameter.