Expert Contributor
Posts: 256
Registered: ‎01-25-2017

High inodes on HDFS nodes



I have a small cluster with 3 physical DNs each with 12 disks.


The cluster is balanced.


The whole cluster holds only about 1 M objects, but we are still getting high-inode alerts across the DNs.


hdfs@slpr-aha01:/root$ hdfs dfs -count /
       59360       419232    4341695319595 /
(columns: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME)


Sample of df -i on one of the nodes


Filesystem  Inodes  IUsed  IFree IUse% Mounted on
/dev/sda3   441184 374965  66219   85% data/server_hdfs/data/disk1
/dev/sdb1   476960 402434  74526   85% data/server_hdfs/data/disk2
/dev/sdc1   476960 396556  80404   84% data/server_hdfs/data/disk3
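For context, these inode totals are roughly what you would see if the disks were formatted with a sparse-inode ext4 profile. A quick sanity check (a sketch: the partition size below is a hypothetical stand-in for the real disk size, and the 4 MiB ratio is an assumption matching ext4's largefile4 profile):

```shell
# Assumption: ext4 created with mkfs -T largefile4, i.e. 1 inode per 4 MiB.
# The partition size is hypothetical, chosen near the size of these disks.
disk_bytes=$((1862 * 1024 * 1024 * 1024))   # ~1.82 TiB partition (assumed)
bytes_per_inode=$((4 * 1024 * 1024))        # largefile4 inode ratio
echo $(( disk_bytes / bytes_per_inode ))    # prints 476672
```

That lands very close to the 476960 inodes shown for sdb1/sdc1 above, so a "fewer, larger files" format is plausible here.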


Cluster Summary

Security is OFF
508220 files and directories, 441240 blocks = 949460 total.
Heap Memory used 1.05 GB is 34% of Committed Heap Memory 3.05 GB. Max Heap Memory is 7.11 GB.
Non Heap Memory used 49.66 MB is 97% of Committed Non Heap Memory 51 MB. Max Non Heap Memory is 130 MB.
Configured Capacity: 64.80 TB
DFS Used: 8.16 TB
Non DFS Used: 317.69 MB
DFS Remaining: 56.64 TB
DFS Used%: 12.60%
DFS Remaining%: 87.40%
Block Pool Used: 8.16 TB
Block Pool Used%: 12.60%
DataNodes usages: Min % / Median % / Max % / stdev %
Live Nodes: 3 (Decommissioned: 0)


Posts: 1,537
Kudos: 277
Solutions: 234
Registered: ‎07-31-2013

Re: High inodes on HDFS nodes

Could you share your drive formatting options? The overall inode capacity
seems to be very low - did you format with some special options for "fewer,
larger files" perhaps?
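A sketch of what I mean (the device name is an example, and the field values below are hypothetical placeholders; read the real ones as root from tune2fs -l on the actual data disks):

```shell
# On the DataNode, as root, read these three fields from the output of:
#   tune2fs -l /dev/sdb1
# The numbers below are hypothetical stand-ins for that output.
inode_count=476960       # "Inode count:" line
block_count=488378368    # "Block count:" line (hypothetical)
block_size=4096          # "Block size:" line
bytes_per_inode=$(( block_count * block_size / inode_count ))
echo "$bytes_per_inode"
```

A result around 4 MiB per inode would confirm a largefile4-style mkfs, i.e. the filesystem was deliberately provisioned with few inodes.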
Backline Customer Operations Engineer
Expert Contributor
Posts: 256
Registered: ‎01-25-2017

Re: High inodes on HDFS nodes

Hi Harsh,

This is the drive formatting we use for this, across all the farms. We have
been fine with it for years.

Still wondering whether this inode provisioning on 3 nodes is enough for
700 K objects.

I deleted the jobcache and usercache, removed unneeded files, and dropped
the object count from 900 K to 700 K, but inode usage is still at 82%.

This makes me wonder what else, other than the HDFS files, can affect this.
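One way to chase the remaining usage is to count filesystem entries per directory on one data disk, since YARN local dirs and logs share these filesystems with the HDFS block files. A sketch (the path is an example, and count_entries is a helper name I made up):

```shell
# count_entries DIR: number of filesystem entries (~ inodes) under DIR,
# without crossing into other mounted filesystems (-xdev)
count_entries() { find "$1" -xdev | wc -l; }

# Example usage on a DataNode (path is an example):
#   for d in /data/server_hdfs/data/disk1/*/; do
#       printf '%8d  %s\n' "$(count_entries "$d")" "$d"
#   done | sort -rn | head
```

Whatever directory floats to the top of that list is where the non-HDFS inodes are hiding.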
Expert Contributor
Posts: 256
Registered: ‎01-25-2017

Re: High inodes on HDFS nodes

@Harsh J For my particular case, where the Hadoop nodes' uptime was 1200 days and the servers were running old CentOS versions, restarting the servers took inode usage down from 88% to 10%.
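That fits the semantics of unlinked-but-open files: a file deleted while a process still holds it open keeps its inode allocated until the last descriptor closes, and daemons running for 1200 days can pin a lot of them. A minimal reproduction on Linux (no HDFS involved):

```shell
# A deleted file keeps its inode while a descriptor is still open on it.
tmp=$(mktemp)
exec 3<"$tmp"                      # hold the file open on fd 3
rm "$tmp"                          # unlink: directory entry gone, inode still allocated
link=$(readlink "/proc/$$/fd/3")   # Linux reports "... (deleted)" for such fds
echo "$link"
exec 3<&-                          # closing the fd finally frees the inode
```

lsof +L1 lists such zero-link-count open files, so this can also be confirmed without rebooting.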