
High inodes on HDFS nodes

Master Collaborator

Hi,

I have a small cluster with 3 physical DNs, each with 12 disks.

The cluster is balanced.

The whole cluster holds only about 1 M objects, yet the DNs are still alerting on high inode usage.

 

hdfs@slpr-aha01:/root$ hdfs dfs -count /
59360 419232 4341695319595 /

 

Sample of df -i output (Filesystem, Inodes, IUsed, IFree, IUse%, mount point) from one of the nodes:

 

/dev/sda3 441184 374965 66219 85% data/server_hdfs/data/disk1
/dev/sdb1 476960 402434 74526 85% data/server_hdfs/data/disk2
/dev/sdc1 476960 396556 80404 84% data/server_hdfs/data/disk3

===================

Cluster Summary

Security is OFF
508220 files and directories, 441240 blocks = 949460 total.
Heap Memory used 1.05 GB is 34% of Committed Heap Memory 3.05 GB. Max Heap Memory is 7.11 GB.
Non Heap Memory used 49.66 MB is 97% of Committed Non Heap Memory 51 MB. Max Non Heap Memory is 130 MB.
Configured Capacity: 64.80 TB
DFS Used: 8.16 TB
Non DFS Used: 317.69 MB
DFS Remaining: 56.64 TB
DFS Used%: 12.60%
DFS Remaining%: 87.40%
Block Pool Used: 8.16 TB
Block Pool Used%: 12.60%
DataNodes usages: Min 12.48%, Median 12.61%, Max 12.70%, stdev 0.09%
Live Nodes:3 (Decommissioned: 0)

 

3 REPLIES

Mentor
Could you share your drive formatting options? The overall inode capacity
seems to be very low - did you format with some special options for "fewer,
larger files" perhaps?
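For context, a back-of-the-envelope check supports that guess. Assuming the 64.80 TB configured capacity is spread roughly evenly across the 3 x 12 disks (an approximation, since per-disk sizes aren't shown in the thread), the implied bytes-per-inode ratio is very sparse:

```shell
#!/bin/sh
# Rough bytes-per-inode implied by the numbers in this thread
# (assumes capacity is spread evenly across 36 disks -- an approximation).
disk_bytes=$((64800000000000 / 36))   # ~1.8 TB per disk
inodes=476960                          # from the df -i sample above
echo $((disk_bytes / inodes))          # prints 3773901, i.e. ~3.8 MB per inode
```

A ratio near 4 MB per inode is what something like mkfs.ext4 -T largefile4 (one inode per 4 MiB) produces, i.e. a "fewer, larger files" profile; the ext4 default is one inode per 16 KiB, which would give roughly 250x more inodes per disk.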

Master Collaborator
Hi Harsh,

This is the drive formatting we use across all the farms, and it has been fine for years.

I am still wondering whether this inode allocation on 3 nodes is enough for 700 K objects.

I deleted the jobcache and usercache, removed unneeded files, and dropped the object count from 900 K to 700 K, but inode usage is still at 82%.

This makes me wonder what, other than HDFS files, can affect this.
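Anything else on the same mounts (DataNode logs, YARN usercache/jobcache leftovers, tmp files) consumes inodes too, since every filesystem entry costs one inode regardless of size. A quick sketch to rank subdirectories by entry count and spot inode hogs; the MOUNT path below is a placeholder, point it at the actual data disk mount:

```shell
#!/bin/sh
# Rank subdirectories of a mount by the number of filesystem entries
# they hold; each entry costs one inode. MOUNT is a placeholder --
# substitute the real data disk mount point.
MOUNT="${MOUNT:-/data/server_hdfs/data/disk1}"

for d in "$MOUNT"/*/; do
  # -xdev stays on this filesystem so counts line up with df -i
  printf '%10d %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```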

Master Collaborator

@Harsh J In my particular case, where the Hadoop nodes had an uptime of 1200 days and the servers ran old CentOS versions, restarting the servers brought inode usage down from 88% to 10%.
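One plausible mechanism for a reboot freeing that many inodes: files that have been deleted but are still held open by long-running processes keep their inodes allocated until the owning process closes them, and 1200 days of uptime gives that plenty of time to accumulate. A way to check for this on a live node (lsof's +L1 option selects open files with an on-disk link count below 1, i.e. deleted-but-open; run as root to see all processes):

```shell
# List open files whose link count is 0 (deleted but still held open);
# these keep their inodes until the owner process closes them or exits.
lsof +L1 | head
```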