
Master node Full disk


I installed Cloudera using the Path B installation on 4 machines (VMs, CentOS 7): 1 master and 3 slaves. After the installation I got a clock synchronization error on every slave, which I resolved by running:

systemctl start ntpd 

After a few minutes I get an error on the master node and can't load the Cloudera Manager page (master:7180), even though the cloudera-scm-server status is running.
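A side note on the clock fix: systemctl start ntpd only lasts until the next reboot. A sketch of making it stick, assuming CentOS 7 with systemd as described above (run on each slave):

```shell
# Start ntpd now and enable it at boot so the clock-sync error
# does not return after a restart (CentOS 7 / systemd).
systemctl start ntpd
systemctl enable ntpd

# Check that the daemon is actually syncing against its peers.
ntpq -p
```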

I noticed afterwards that the hard drive of the master node is full. When I run df -h, I get:

[root@master ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 34G 34G 20K 100% /
devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs 4.1G 8.7M 4.1G 1% /run
tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/sda1 497M 212M 286M 43% /boot
/dev/mapper/centos-home 17G 36M 17G 1% /home
tmpfs 833M 0 833M 0% /run/user/0

I thought that maybe the ntpd logs were behind all this.

If the / directory is full (Use% = 100%), then the master can't display anything.

Any help to resolve this, and to keep the master node's disk from filling up again, would be appreciated.

This is the third time I'm trying to install Cloudera, and every time I hit the same problem.





Can you run the following commands as root and identify which folder is consuming the most space? Once you have a result, run the command again with that folder name and dig further until you reach the offending subfolder:


$ du -sh /

$ du -sh /*
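To make that drill-down concrete, here is a self-contained sketch on a throwaway directory tree (the paths under mktemp are purely illustrative, not from the cluster):

```shell
# Build a small tree where one subfolder clearly dominates.
demo=$(mktemp -d)
mkdir -p "$demo/big" "$demo/small"
dd if=/dev/zero of="$demo/big/dump.bin" bs=1M count=8 2>/dev/null
echo "tiny" > "$demo/small/note.txt"

# Step 1: size of each top-level entry, largest first.
du -sh "$demo"/* | sort -rh

# Step 2: repeat inside the biggest entry until you find the culprit.
du -sh "$demo/big"/*

rm -rf "$demo"
```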


Note: this is a disk space issue; I don't see anything related to memory in your description, so the topic and the description are confusing.


When I run du -sh /, I get:


du: cannot access ‘/proc/4982/task/4982/fd/4’: No such file or directory
du: cannot access ‘/proc/4982/task/4982/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/4982/fd/4’: No such file or directory
du: cannot access ‘/proc/4982/fdinfo/4’: No such file or directory
34G     /


Cloudera Employee

Try running 'du -h / --max-depth=3 | grep G' to figure out which path is using that space, then drill down from there.
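To see what that looks like, here is a runnable sketch on a temporary tree. On the real host the grep G filter keeps only gigabyte-sized paths; the demo greps for M instead because its files are only a few MiB (the directory names are made up):

```shell
root=$(mktemp -d)
mkdir -p "$root/logs/navigator" "$root/conf"
dd if=/dev/zero of="$root/logs/navigator/heap.hprof" bs=1M count=4 2>/dev/null

# Depth-limited report; the grep keeps only MiB-sized entries here.
# On the real master you would run: du -h / --max-depth=3 | grep G
du -h --max-depth=2 "$root" | grep M

rm -rf "$root"
```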







I found the files using that space:


-rw-------. 1 cloudera-scm cloudera-scm 359M Mar 27 14:40 mgmt_mgmt-NAVIGATOR-9a89af62abe8393b48c78926720ffe2c_pid28766.hprof

This file is repeated 40 times.


And:

-rw-------. 1 cloudera-scm cloudera-scm 761M Mar 27 15:10 mgmt_mgmt-NAVIGATORMETASERVER-9a89af62abe8393b48c78926720ffe2c_pid11739.hprof

This one is repeated 12 times.

How can I resolve this?
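Assuming those dumps are no longer needed for debugging, the space can be reclaimed with find. The sketch below runs on a throwaway directory; substitute the directory your du drill-down actually identified (the dump location varies between setups, so the path here is not prescriptive):

```shell
# Demo directory standing in for wherever the .hprof files live.
dumps=$(mktemp -d)
touch "$dumps/mgmt_mgmt-NAVIGATOR_pid1.hprof" "$dumps/mgmt_mgmt-NAVIGATOR_pid2.hprof"

# Dry run: list the dumps and their total size before deleting anything.
find "$dumps" -name '*.hprof' -exec du -ch {} +

# Remove them once the list looks right.
find "$dumps" -name '*.hprof' -delete
find "$dumps" -name '*.hprof' | wc -l   # none left

rm -rf "$dumps"
```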


You can opt to stop Navigator: it writes a huge amount of logs, and your cluster can run without this service as well.

Rising Star
The .hprof files are memory dumps created when a Java process fails due to lack of memory. It could be that either the server itself has insufficient memory or that the Navigator configuration does not allocate enough memory to the JVM. How much RAM does the VM running the master have?


The master is running on a VM with 9 GB of RAM.


Hi Jim,

How do I change the Navigator configuration to allocate enough memory to the JVM?

Super Guru

Hello @ghandrisaleh,


This is a bit off topic, but you can configure the Navigator Metadata Server heap in Cloudera Manager via "Java Heap Size of Navigator Metadata Server in Bytes".


For the Navigator Audit Server, the corresponding setting is "Java Heap Size of Auditing Server in Bytes".
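Both of those fields take a raw byte count, which is easy to mistype. A quick shell arithmetic check for a hypothetical 2 GiB heap (the 2 GiB figure is just an example, not a sizing recommendation for this cluster):

```shell
# 2 GiB expressed in bytes, the unit the CM heap fields expect.
echo $((2 * 1024 * 1024 * 1024))   # prints 2147483648
```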