As the NameNode report and UI (including the Ambari UI) show that your DFS Used is reaching almost 87% to 90%, it would be a good idea to increase the DFS capacity.
To understand Non DFS Used in detail, start from the formula: Non DFS Used = Configured Capacity - DFS Remaining - DFS Used.
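As a quick sanity check of that formula, here is a minimal sketch with hypothetical capacity numbers (the real values come from the NameNode report):

```python
# Hypothetical numbers (bytes) purely to illustrate the formula;
# substitute the values from your own NameNode report / Ambari widget.
configured_capacity = 100 * 1024**3   # 100 GB of disk configured for HDFS
dfs_used = 60 * 1024**3               # 60 GB occupied by HDFS block data
dfs_remaining = 25 * 1024**3          # 25 GB still available to HDFS

# Non DFS Used = Configured Capacity - DFS Remaining - DFS Used
non_dfs_used = configured_capacity - dfs_remaining - dfs_used
print(non_dfs_used / 1024**3)  # -> 15.0 (GB taken by non-HDFS files on the same disks)
```

In other words, Non DFS Used is whatever space on the configured disks is consumed by files outside of HDFS (logs, OS files, other applications).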
You can refer to the following article, which explains the concepts of Configured Capacity, Present Capacity, DFS Used, DFS Remaining, and Non DFS Used in HDFS. The diagram below clearly explains these space parameters, treating HDFS as a single disk.
The above is one of the best articles for understanding the DFS and Non-DFS calculations and the remedy.
You add capacity by giving dfs.datanode.data.dir more mount points or directories. In Ambari, this property lives in the HDFS configs (its exact location varies by Ambari version; look under the Advanced section), and it is stored in hdfs-site.xml. Each new disk you add to the comma-separated list increases the total capacity. Preferably, every machine should have the same disk and mount-point structure.
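As a sketch of what that comma-separated list looks like in hdfs-site.xml (the /grid/1 and /grid/2 mount points below are hypothetical; use your own mount points):

```xml
<!-- hdfs-site.xml: each comma-separated directory should sit on its own
     disk; the DataNode adds each disk's capacity to Configured Capacity. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data</value>
</property>
```

After changing this property, restart the DataNodes so the new directories are picked up.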
The HDFS dashboard metrics widget "HDFS Disk Usage" shows "the percentage of distributed file system (DFS) used, which is a combination of DFS and non-DFS used."
So you can hover your mouse over the "HDFS Disk Usage" widget and check the values shown there for "DFS Used", "Non DFS Used", and "Remaining". You should see something like the following:
HDFS splits and stores data in blocks. Each block is at most 64MB or 128MB by default, depending on your HDFS version. Consider a file of size 2MB stored in a block: the remaining 62MB (assuming the 64MB default) is not consumed, because HDFS does not preallocate the full block on disk. The block size is a logical upper bound used for splitting and placement, so the actual hard disk space used for this file is only 2MB (per replica), even though the file occupies one full block in the namespace.
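The block accounting above can be sketched as follows; this is a simplified model (single replica, 64MB block size assumed), not the actual NameNode logic:

```python
# Sketch: the HDFS block size is an upper bound per block, not a
# preallocated unit. A small file occupies one block in the namespace
# but only its real size on the DataNode disk.
BLOCK_SIZE = 64 * 1024**2  # 64 MB, the default in older HDFS versions

def blocks_for(file_size):
    """Number of HDFS blocks a file of file_size bytes needs."""
    return max(1, -(-file_size // BLOCK_SIZE))  # ceiling division

def disk_used(file_size):
    """Actual bytes written to DataNode disks (single replica) --
    blocks are not padded out to BLOCK_SIZE."""
    return file_size

small = 2 * 1024**2    # a 2 MB file
print(blocks_for(small), disk_used(small) / 1024**2)  # -> 1 2.0

big = 130 * 1024**2    # a 130 MB file splits into 64 + 64 + 2 MB
print(blocks_for(big))  # -> 3
```

So many small files inflate the block (and NameNode metadata) count, but DFS Used still reflects the real bytes stored on disk.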