
HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?


New Contributor

HDFS is in a critical state due to bad health; the available space is not more than 20 GB. How do I handle this?

5 REPLIES

Re: HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?

Expert Contributor

Hi @Dharm,

What do you mean by "bad health space" here? Is HDFS full and raising critical alerts in Ambari or Cloudera Manager? Are you able to read and write to HDFS? Could you please elaborate? Attaching a screenshot would also help us understand the issue.
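
If it helps, a quick way to check whether HDFS itself is full (as opposed to a local disk on one host) is to pull a capacity report from the command line. A minimal sketch, assuming shell access on a cluster node as the hdfs user:

# Overall HDFS capacity, used and remaining space
hdfs dfs -df -h /

# Per-DataNode capacity, usage and remaining space
hdfs dfsadmin -report

# Local (non-HDFS) disk usage on the host that is raising the alert
df -h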

 

Regards

 


Re: HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?

New Contributor
Hi Team,
My disk space utilization has reached 92%. How do I solve this problem?

Re: HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?

Expert Contributor

Try to clean up some old files from the disk, or add more space to it.
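
If the HDFS side is what is filling up, a rough cleanup sketch (the paths below are placeholders, not taken from this thread):

# Show sizes (in bytes) of the top-level directories, largest last
hdfs dfs -du / | sort -n | tail -10

# Remove data you no longer need; -skipTrash frees the space immediately
hdfs dfs -rm -r -skipTrash /path/to/old/data

# If files were deleted without -skipTrash, empty the trash to reclaim space
hdfs dfs -expunge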


Re: HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?

Cloudera Employee

Adding to Jagadeesan's comment.

You can run the disk balancer command to balance the disk space across your cluster.

If you are using a multi-node Hadoop cluster, you can follow this link: https://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html

Or, if it is a single-node cluster, or to balance disk load within a single DataNode, use the following link: https://blog.cloudera.com/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/
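
For reference, a minimal command sketch of both tools (the hostname and plan file path below are placeholders; the intra-DataNode disk balancer also requires dfs.disk.balancer.enabled=true in hdfs-site.xml):

# Balance data across DataNodes in the cluster (threshold is a percentage)
hdfs balancer -threshold 10

# Balance data across the disks of a single DataNode (hostname is a placeholder)
hdfs diskbalancer -plan dn1.example.com
hdfs diskbalancer -execute /system/diskbalancer/<date>/dn1.example.com.plan.json
hdfs diskbalancer -query dn1.example.com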

 

Let's Hadoop

Rajkumar.M


Re: HDFS is in a critical state due to bad health (available space is not more than 20 GB). How do I handle this?

New Contributor
Can you take my system over AnyDesk? Is that possible for you?

Requesting your help, it's urgent.


In HDFS we have a total space of 2.9 TB and 2.7 GB is consumed. Sharing the screenshots below.
[Screenshots attached: HDFS, Impala, YARN]



- ip-10-0-1-139.ap-south-1.compute.internal: Memory Overcommit Validation Threshold
  Memory on host ip-10-0-1-139.ap-south-1.compute.internal is overcommitted. The total memory allocation is 108.6 GiB, but there are only 124.6 GiB of RAM (24.9 GiB of which are reserved for the system). Visit the Resources tab on the Host page for allocation details. Reconfigure the roles on the host to lower the overall memory allocation. Note: Java maximum heap sizes are multiplied by 1.3 to approximate JVM overhead.

- ip-10-0-1-11.ap-south-1.compute.internal: Memory Overcommit Validation Threshold
  Memory on host ip-10-0-1-11.ap-south-1.compute.internal is overcommitted. The total memory allocation is 120.7 GiB, but there are only 124.6 GiB of RAM (24.9 GiB of which are reserved for the system). Visit the Resources tab on the Host page for allocation details. Reconfigure the roles on the host to lower the overall memory allocation. Note: Java maximum heap sizes are multiplied by 1.3 to approximate JVM overhead.

- ip-10-0-1-41.ap-south-1.compute.internal: Memory Overcommit Validation Threshold
  Memory on host ip-10-0-1-41.ap-south-1.compute.internal is overcommitted. The total memory allocation is 120.7 GiB, but there are only 124.6 GiB of RAM (24.9 GiB of which are reserved for the system). Visit the Resources tab on the Host page for allocation details. Reconfigure the roles on the host to lower the overall memory allocation. Note: Java maximum heap sizes are multiplied by 1.3 to approximate JVM overhead.
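
A rough way to read these alerts, using the numbers they report (the calculation below is only an illustrative sketch, not a Cloudera Manager tool):

# Numbers taken from the alerts above
total_ram=124.6     # GiB of physical RAM on the host
reserved=24.9       # GiB reserved for the system
allocated=120.7     # GiB allocated to roles on ip-10-0-1-11 / ip-10-0-1-41

# RAM actually available to roles: 124.6 - 24.9 = 99.7 GiB
available=$(echo "$total_ram - $reserved" | bc)
echo "Usable by roles: ${available} GiB, currently allocated: ${allocated} GiB"

# 120.7 GiB > 99.7 GiB, so the host is overcommitted. Reduce role memory
# (remembering that Java max heap sizes are multiplied by 1.3 for JVM overhead)
# until the total allocation fits under 99.7 GiB.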