Support Questions

Find answers, ask questions, and share your expertise

How to delete non-DFS storage data

Explorer

sudo -u hdfs hdfs dfsadmin -report
Configured Capacity: 860500408320 (801.40 GB)
Present Capacity: 1417964708 (1.32 GB)
DFS Remaining: 322059428 (307.14 MB)
DFS Used: 1095905280 (1.02 GB)
DFS Used%: 77.29%
Replicated Blocks:
Under replicated blocks: 520
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Low redundancy blocks with highest priority to recover: 520
Pending deletion blocks: 0
Erasure Coded Block Groups:
Low redundancy block groups: 0
Block groups with corrupt internal blocks: 0
Missing block groups: 0
Low redundancy blocks with highest priority to recover: 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.24.32:50010 (xxxxxxxxxxxxx)
Hostname: gaian-lap386.com
Decommission Status : Normal
Configured Capacity: 860500408320 (801.40 GB)
DFS Used: 1095905280 (1.02 GB)
Non DFS Used: 808787101696 (753.24 GB)
DFS Remaining: 322059428 (307.14 MB)
DFS Used%: 0.13%
DFS Remaining%: 0.04%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Mon Nov 18 12:31:36 IST 2019
Last Block Report: Mon Nov 18 12:24:12 IST 2019
Num of Blocks: 522
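A quick sanity check on the report above: the Non DFS Used figure accounts for nearly the whole disk, which is why DFS Remaining is down to a few hundred MB. The share can be computed from the two byte counts in the datanode section (shell integer arithmetic, so the percentage is truncated):

```shell
# Values taken from the dfsadmin -report output above, in bytes.
configured=860500408320   # Configured Capacity
non_dfs=808787101696      # Non DFS Used

# Integer percent of the disk held by non-DFS data — prints 93%.
echo "$(( non_dfs * 100 / configured ))%"
```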

 

1 ACCEPTED SOLUTION

Expert Contributor

Hi @Manoj690 

 

1. First, check which directories are configured for DFS data storage.

Log in to the Ambari UI -> Services -> HDFS -> Configs -> [search for dfs.datanode.data.dir]

Capture the list of directories defined there.

E.g., my list is as follows:
/data01/hadoop/hdfs/data,/data02/hadoop/hdfs/data

2. Log in to the datanodes and go to the mount.

In my case: $ cd /data01/hadoop/hdfs/

3. Check whether there is any other directory/data inside "/data01/hadoop/hdfs/" or "/data01/".

4. Anything other than the "data" directories is counted as non-DFS and shows up in the dfsadmin -report output.

5. Removing that data will reduce your Non DFS Used.
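The steps above can be sketched as a quick shell check. The mount points below are examples; substitute the parents of the directories listed in your dfs.datanode.data.dir:

```shell
#!/bin/sh
# Hypothetical mounts; use the parents of your dfs.datanode.data.dir entries.
for mount in /data01 /data02; do
  echo "== $mount =="
  # Per-entry disk usage under the mount; anything here other than
  # hadoop/hdfs/data counts toward Non DFS Used in dfsadmin -report.
  du -sh "$mount"/* 2>/dev/null
done
```

Review the output carefully before deleting anything; the DataNode's own block directory (hadoop/hdfs/data) must stay untouched.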

 

Feel free to share your output if anything is unclear or you need help.

 


2 REPLIES 2


Explorer
In Configs I didn't find dfs.datanode.data.dir.
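If the property doesn't show up in the Ambari Configs search, it can also be read from the client configuration on a node. A sketch, assuming the HDFS client is on the PATH and the config files live in the usual /etc/hadoop/conf location:

```shell
# Ask the client configuration directly (requires the HDFS client):
hdfs getconf -confKey dfs.datanode.data.dir

# Or grep the property out of hdfs-site.xml:
grep -A1 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml
```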