Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant. To ask a new question, please post a new topic on the appropriate active board.

HDFS BlockPlacementPolicy, is there an alternative that considers available disk space?


Hello,

I am wondering if there is a BlockPlacementPolicy that, in addition to storing replicas safely on different racks as the default one does, also considers how much disk space is available on each node.

In a cluster that consists of two sets of machines with a big difference in available disk space, the default policy will cause the disks of the set with less space to fill up long before you actually reach your total HDFS capacity.

Is there any such policy ready to be used?

Best Regards

Thomas

1 ACCEPTED SOLUTION


I just found that something like this was added somewhat recently:

https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/h...

This seems to be what I was looking for.
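For anyone landing here later: if the policy being referred to is Hadoop's AvailableSpaceBlockPlacementPolicy (contributed under HDFS-8131), it is enabled via hdfs-site.xml. The snippet below is a sketch based on that feature; check the property names against the documentation for your Hadoop version before relying on them:

```xml
<!-- hdfs-site.xml: sketch, assuming the AvailableSpaceBlockPlacementPolicy from HDFS-8131 -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
</property>
<property>
  <!-- How strongly to prefer nodes with more free space when choosing between
       two candidates: 0.5 means no preference, values toward 1.0 favor the
       node with more available space. -->
  <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
  <value>0.6</value>
</property>
```

Note that the NameNode must be restarted for a block placement policy change to take effect, and the policy only influences where new blocks are written; existing blocks are not rebalanced.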

