Picture the scene...

You've run the HDFS Balancer on your cluster and your data is balanced nicely across your DataNodes. The cluster is humming along, but then your system administrator runs across the office to you with alerts about full disks on one of your DataNode machines! What now?

The Low-Down

Uneven data distribution among the disks of a single node isn't dangerous as such, though in some rare cases you may notice the fuller disks becoming I/O bottlenecks. As of Apache Hadoop 2.7.3, it is not possible to balance disks within a single node (aka intra-node balancing): the HDFS Balancer only balances across DataNodes, not within them. HDFS-1312 is tracking the work to introduce this functionality into Apache HDFS, but it will not be available before Hadoop 3.0.
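To see whether a node's disks are actually skewed, standard tools are enough. A minimal sketch, assuming each directory listed in dfs.datanode.data.dir sits on its own mount:

```shell
# List local filesystems sorted by the Use% column, fullest first.
# On a DataNode, the rows of interest are the mounts backing the
# directories in dfs.datanode.data.dir (layout varies per cluster).
df -h | tail -n +2 | sort -k5 -nr | head
```

A large spread in Use% between the data mounts on one node is the symptom described above.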

The conservative approach:

Set the following property in your HDFS configuration, adding it if it isn't already there: dfs.datanode.du.reserved (reserved space in bytes per volume). Each DataNode will then always leave at least this much space free on every data disk. Choose a value that will make your sysadmin happy and continue to use the HDFS Balancer as before until HDFS-1312 is complete.
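As a sketch, the property goes into hdfs-site.xml like this; the 50 GB value is only an example, not a recommendation:

```xml
<!-- hdfs-site.xml: reserve space on every DataNode volume for
     non-HDFS use. 53687091200 bytes = 50 GB (example value only). -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>53687091200</value>
</property>
```

The DataNodes need a restart to pick up the new value.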

The brute force method (careful!):

Run fsck and MAKE SURE there are no under-replicated blocks (IMPORTANT!). Then wipe the contents of the offending disk's DataNode data directory; HDFS will re-replicate those blocks elsewhere automatically. NOTE: Never wipe more than one disk across the cluster at a time!
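A minimal sketch of that safety check, assuming you capture the `hdfs fsck /` output to a file first; the function name check_fsck is hypothetical:

```shell
# Refuse (non-zero exit) unless the fsck summary shows zero
# under-replicated and zero missing blocks. Assumes summary lines of
# the form "Under-replicated blocks: 0 (0.0 %)".
check_fsck() {
  ! grep -E 'Under-replicated blocks:[[:space:]]*[1-9]|Missing blocks:[[:space:]]*[1-9]' "$1"
}

# Example use on the cluster:
#   hdfs fsck / > /tmp/fsck.out && check_fsck /tmp/fsck.out && echo "safe to proceed"
```

Only wipe the disk when the gate passes, and re-run fsck after HDFS has finished re-replicating before touching the next one.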
