Increase HDFS capacity with additional disks

Explorer

I have a 2-Data-Node cluster where each Data Node has one large-capacity disk used for HDFS, mounted at /hadoop. Now I need to add more storage to the cluster. According to the suggestion from this question, I need to create a new mount point, for example /extraDisk, and mount the new disk there. Then I need to create another directory, /extraDisk/hdfsData. After that, I add the new directory to the "DataNode directories" field under HDFS -> Configs -> Settings, as a comma-separated value. A rough sketch of these steps follows my questions below. Here are my questions.

1. Since I have 2 Data Nodes, do I have to repeat the above steps on both of them?

2. What if the disks on the two Data Nodes are different sizes? Can I still use the above steps?

3. Can I add a disk to just one of the Data Nodes?
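
For concreteness, here is roughly what I plan to run on a Data Node; the device name /dev/sdb1 is just an example, and the hdfs:hadoop owner/group may differ per install:

    # On the Data Node that receives the new disk (device name is an example)
    sudo mkdir /extraDisk
    sudo mount /dev/sdb1 /extraDisk   # plus an /etc/fstab entry so the mount survives reboots
    sudo mkdir -p /extraDisk/hdfsData
    sudo chown -R hdfs:hadoop /extraDisk/hdfsData

In Ambari, the "DataNode directories" field would then hold the existing directory plus the new one, e.g. /hadoop/hdfsData,/extraDisk/hdfsData (the first entry being whatever the field shows today), followed by a DataNode restart.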

1 ACCEPTED SOLUTION

Master Mentor

@Harry Li

Question 1

You only need to create identical mount points on both DataNodes; these will be mapped to dfs.datanode.data.dir.
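
In hdfs-site.xml terms, the Ambari field maps to this property; the paths below are just the examples from the question:

    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/hadoop/hdfsData,/extraDisk/hdfsData</value>
    </property>

Both DataNodes read the same property value, which is why the new mount point must exist at the same path on each node.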

Question 2

You can have disks of different sizes, but it is advisable to keep the sizes identical on all nodes.
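
If the sizes do end up differing, the standard HDFS balancer (not specific to this answer, so verify it fits your setup) can redistribute blocks so that each node's utilization stays close to the cluster average:

    # Rebalance blocks across DataNodes; -threshold is the allowed deviation
    # (in percentage points) of each node's utilization from the cluster average.
    hdfs balancer -threshold 10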

Question 3

I haven't tested it, but I think you can add a disk to only one node. The smaller node will fill up faster, though, so at some point it will stop accepting write operations and the cluster will have no way to balance itself out.
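
A quick way to watch for that imbalance (a standard HDFS command):

    # Prints configured capacity, DFS used, and DFS remaining per DataNode,
    # so a node that is filling up faster is easy to spot.
    hdfs dfsadmin -report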

HTH

Master Mentor

@Harry Li

Any updates on this? If the answer helped, please take the time to accept it.

Explorer

Yes. And I accepted the answer. Thanks.