I'm having a problem where HDFS (HDP v3.1.0) is running out of storage space (which is also causing Spark jobs to hang in the ACCEPTED state). I assume there is some configuration that would let HDFS use more of the storage already present on the node hosts, but exactly which one was not clear from a quick search.
In the Ambari UI, I see...

(from the Ambari UI)

(from the NameNode UI).
Yet when looking at the hosts overall via the Ambari UI, there still appears to be a good amount of space left on the cluster hosts (each node, excluding the first in this list, has a total of 140GB).
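For anyone looking into this, I can also run commands like the following on a DataNode host to compare what HDFS reports against what the OS sees (just a rough sketch of the commands; output omitted):

    # Rough sketch: compare what HDFS thinks it has vs. what the OS reports
    # (run on a DataNode host)

    # HDFS view: configured capacity, DFS used, non-DFS used, DFS remaining per DataNode
    hdfs dfsadmin -report

    # OS view: total/used/available space for each mounted filesystem on this host
    df -h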

I'm not sure which settings are relevant, but here are the general settings in Ambari:

My interpretation of the "Reserved Space for HDFS" setting is that 13GB should be reserved for non-DFS (i.e. local FS) storage, so it does not seem to make sense that HDFS is already running out of space. Am I interpreting this incorrectly? Are there any other HDFS configs that should be shown in this question?
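In case it matters, my assumption is that the "Reserved Space for HDFS" setting in Ambari corresponds to dfs.datanode.du.reserved in hdfs-site.xml, and that the directories HDFS actually writes to come from dfs.datanode.data.dir. A quick way to check the effective values (again, just a sketch):

    # Check the effective values behind the Ambari settings
    # (assuming "Reserved Space for HDFS" maps to dfs.datanode.du.reserved)
    hdfs getconf -confKey dfs.datanode.du.reserved   # bytes reserved per volume for non-DFS use
    hdfs getconf -confKey dfs.datanode.data.dir      # local dirs/mounts the DataNodes store blocks in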
Can anyone with more experience point me to the right configuration for letting HDFS use more of the storage already present on the node hosts? Also, please let me know if this could be due to some other problem I am not seeing.