
How to set worker HDFS disk space?


Hi,

I installed Cloudera Manager from the Cloudera repo with this command line:

sudo yum install cloudera-manager-daemons cloudera-manager-server

All the services I needed were installed automatically and everything is working fine.

My cluster has 5 nodes:

- 1 master node

- 1 utility node

- 3 worker nodes

Each node runs CentOS and has 2 TB of disk space.

After the automatic installation, the Cloudera Manager web UI shows that each node has "only" 135 GB of HDFS capacity instead of the expected 2 TB.

When I log on to each node, "df -h" confirms that there really are 2 TB of total disk space, but "hdfs dfs -df -h" reports only 135 GB of HDFS storage.

Why is HDFS only 135 GB? I never set that anywhere during the installation.

How can I increase this storage to 1500 GB?

I looked at the config file hdfs-site.xml, where "dfs.namenode.name.dir" is set to "file:///dfs/nn", but I don't really understand where that points...
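For reference, here is the relevant block as it appears in hdfs-site.xml, together with its DataNode counterpart, dfs.datanode.data.dir, which (if I understand correctly) is the property that actually controls where the workers store HDFS blocks; the /dfs/dn value is what I believe Cloudera Manager uses by default:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///dfs/nn</value>   <!-- NameNode metadata, a local directory on the master -->
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///dfs/dn</value>   <!-- HDFS block storage, a local directory on each worker -->
</property>

The file:// scheme just means a path on the node's local filesystem.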

Thanks for your help.

1 REPLY

Re: How to set worker HDFS disk space?


Replying to myself.

I found a few things...

/dfs/dn IS a directory on the CentOS filesystem; it is a subdirectory of / (the root partition).

On my CentOS install, /home is the partition that holds the 2 TB of disk...
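A quick way to see this is to ask df which filesystem backs each path (df resolves a path to its mount point):

df -h /dfs/dn   # resolves to the small root partition /
df -h /home     # resolves to the big 2 TB partition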

So I think I have to "move" space from /home to / in order to get more space for the dfs directory (rough sketch below).

I would need to do these steps on each DataNode...
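For anyone attempting the same thing, here is a rough sketch on the stock CentOS 7 LVM layout (the volume group name centos is the installer default; adjust to yours). Beware: XFS filesystems cannot be shrunk, so /home has to be backed up and destroyed, then the freed space is handed to /:

umount /home
lvremove /dev/centos/home                 # destroys the home logical volume (back it up first!)
lvextend -l +100%FREE /dev/centos/root    # give the freed extents to the root LV
xfs_growfs /                              # grow the root filesystem online
# then remove or recreate the /home entry in /etc/fstab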

Alternatively, I could add a directory under /home to dfs.datanode.data.dir and raise dfs.datanode.du.reserved above the default 5 GB.
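A sketch of that second option, assuming a new directory such as /home/dfs/dn is created on each worker and owned by the hdfs user (that path is my own choice, not a default). With Cloudera Manager these values are changed in the HDFS service configuration rather than by editing hdfs-site.xml by hand, but the resulting properties would look like:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///dfs/dn,file:///home/dfs/dn</value>   <!-- comma-separated list of data volumes -->
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>53687091200</value>   <!-- bytes reserved per volume for non-HDFS data; here 50 GB -->
</property>

Raising dfs.datanode.du.reserved matters here because HDFS would then share the /home filesystem with ordinary user data.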
