
Configure Storage capacity of Hadoop cluster


Re: Configure Storage capacity of Hadoop cluster

@vinay kumar What's the output of df -h on slave 4?

You can add /hadoop to the DataNode directories, restart HDFS, and then remove the other mounts from the settings.
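
For reference, here is a minimal sketch of what that change looks like in hdfs-site.xml (the existing path below is an assumption; keep whatever is already in your dfs.datanode.data.dir value and append the new mount):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- existing entries stay; the new /hadoop mount is appended, comma-separated -->
      <value>/mnt/hadoop/hdfs/data,/hadoop/hdfs/data</value>
    </property>

In Ambari this is the "DataNode directories" field under HDFS > Configs, and HDFS needs a restart for the change to take effect.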


Re: Configure Storage capacity of Hadoop cluster

Contributor

@Neeraj Sabharwal

I have attached an image in my previous comment showing the df -h output for slave-4. So if I remove the other directories, wouldn't it affect the existing cluster in any way?

I am getting this error after removing the other directories and replacing them with /hadoop. And yes, the cluster size has increased.

(Attachment: 2655-errorhdfs.png)

Re: Configure Storage capacity of Hadoop cluster

@vinay kumar I was going to add /hadoop first and then remove the other directories after some time.

Re: Configure Storage capacity of Hadoop cluster

Contributor
@Neeraj Sabharwal

I think I have a clear picture of it now. Since the / mount is partitioned with 400 GB, we should use it alone to make use of that space. But that configuration is the default one given by Ambari. Wouldn't changing it affect the cluster in any way? Should I take care of anything?

Re: Configure Storage capacity of Hadoop cluster

@vinay kumar We allocate dedicated disks for HDFS data. We have to modify the DataNode directories setting (dfs.datanode.data.dir) during the install.
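
As an illustration, a dedicated-disk setup typically looks like this (the /grid mount points are assumptions for the example; each entry should sit on its own physical disk):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- one entry per dedicated data disk -->
      <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data</value>
    </property>

The DataNode round-robins block writes across the listed directories, which is why each one should map to a separate disk rather than to folders on the same partition.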

Re: Configure Storage capacity of Hadoop cluster

Contributor
@Neeraj Sabharwal

Adding /hadoop and deleting the other directories after some time is resulting in missing blocks. Is there any way to overcome this? When I run the hdfs fsck command, it shows that all blocks are missing. The reason could be the removal of the directories. Do we need to copy the data from the old directories into the new directory (/hadoop)? Will that help?
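
Copying the block data over before dropping the old directories should avoid the missing blocks. A minimal sketch, assuming the old directory was /mnt/hadoop/hdfs/data and the new one is /hadoop/hdfs/data (both paths are assumptions; substitute your actual dfs.datanode.data.dir entries):

    # on each DataNode, with HDFS stopped
    cp -a /mnt/hadoop/hdfs/data/. /hadoop/hdfs/data/   # -a preserves ownership and permissions
    chown -R hdfs:hadoop /hadoop/hdfs/data             # the hdfs user must own the new tree

    # after restarting HDFS, check block health
    hdfs fsck /

Also, if the old directories were only removed from the configuration and the data is still on disk, re-adding those paths to dfs.datanode.data.dir and restarting HDFS should bring the blocks back.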

Re: Configure Storage capacity of Hadoop cluster

@vinay kumar

Maybe you have a problem with your disk partitioning. Can you check how much space you have allocated to the partitions used by HDP?

Here's a link to the partitioning recommendations: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_cluster-planning-guide/content/ch_partiti...
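
One quick way to compare the two views (a sketch; the /hadoop path is an assumption based on this thread):

    df -h /hadoop            # what the OS sees on the data mount
    hdfs dfsadmin -report    # Configured Capacity and DFS Remaining per DataNode

If the Configured Capacity is far below the df -h figure, the DataNode directories are sitting on the wrong (smaller) partitions.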

Re: Configure Storage capacity of Hadoop cluster

Contributor

I have allocated around 400 GB for the / partition. PFA.

(Attachment: 2650-df.png)

Re: Configure Storage capacity of Hadoop cluster

Contributor

Hi @vinay kumar, I think your partitioning is wrong: you are not using "/" for the HDFS directories. If you want to use the full disk capacity, you can create a folder under "/", for example /data/1, on every DataNode using the command "# mkdir -p /data/1" (as root), add it to dfs.datanode.data.dir, and restart the HDFS service.

You should get the desired output.
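
Putting that together, a minimal sketch of the steps (the /data/1 path is the example from above; the hdfs:hadoop ownership is an assumption based on a default HDP install):

    # run as root on every DataNode
    mkdir -p /data/1
    chown -R hdfs:hadoop /data/1

Then append /data/1 to dfs.datanode.data.dir (the "DataNode directories" field in Ambari) and restart HDFS. New blocks will start landing in the added directory, and the cluster's configured capacity should reflect the 400 GB root partition.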