
Cluster hosts have more storage space than HDFS seems to recognize / have access to? How to increase HDFS storage use?


I'm having a problem where HDFS (HDP v3.1.0) is running out of storage space (which is also causing problems with Spark jobs hanging in the ACCEPTED state). I assume there is some configuration that would let HDFS use more of the storage space already present on the node hosts, but exactly which one was not clear from quick googling. Can anyone with more experience help with this?
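For reference, this is roughly how I have been comparing what HDFS thinks it has against what the OS reports on a worker host (run as the hdfs superuser):

    # Summary of Configured Capacity, DFS Used, and DFS Remaining,
    # plus a per-DataNode breakdown
    sudo -u hdfs hdfs dfsadmin -report

    # OS-level view of the same host's filesystems, for comparison
    df -h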

 

In the Ambari UI, I see:

[Capture001.PNG: screenshot from the Ambari UI]

[Capture002.PNG: screenshot from the NameNode UI]

Yet when looking at the hosts overall via the Ambari UI, there still appears to be a good amount of space left on the cluster hosts (each node, excluding the first in the list, has a total of 140 GB):

[Capture003.PNG: hosts overview from the Ambari UI]

Not sure which settings are relevant, but here are the general settings in Ambari:

[Capture004.PNG: general HDFS settings from the Ambari UI]

My interpretation of the "Reserved Space for HDFS" setting is that 13 GB should be reserved for non-DFS (i.e. local FS) storage, so it does not seem to make sense that HDFS is already running out of space. Am I interpreting this wrongly? Are there any other HDFS configs I should include in this question?
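In case it matters, my understanding is that the Ambari "Reserved Space for HDFS" field maps to dfs.datanode.du.reserved in hdfs-site.xml, and that it applies per volume on each DataNode (so 13 GB here means 13 GB held back on every data disk, not 13 GB cluster-wide). A sketch of the equivalent raw config, assuming the value is 13 GiB expressed in bytes:

    <property>
      <name>dfs.datanode.du.reserved</name>
      <!-- 13 GiB kept free for non-DFS use on each DataNode volume: 13 * 1024^3 bytes -->
      <value>13958643712</value>
    </property>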

Also, if anyone could let me know whether this may be due to other problems I am not seeing, that would be appreciated.
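One thing I still plan to rule out: as far as I know, the capacity HDFS reports is just the sum of the filesystems backing the directories listed in dfs.datanode.data.dir, so if those directories sit on a small partition while the 140 GB lives on a different mount, HDFS would never see that space. A hypothetical hdfs-site.xml sketch, assuming the HDP default data dir plus an extra data disk mounted at /grid/1 (that mount point is made up for illustration):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- comma-separated list; HDFS capacity comes from the mounts backing these paths -->
      <value>/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data</value>
    </property>

Running df -h /hadoop/hdfs/data on a DataNode should show which mount actually backs the current data dir.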
