Created 02-08-2018 08:25 AM
Hi! Our cluster is filling up; around 80% of the total space is used. This causes several disks to be overutilized - they have passed the critical threshold of 95% usage. We are running the rebalancer, but the results are small and slow to appear. I wonder if there is any threshold above which no more data is written to a disk, or will a DataNode keep writing data until it takes 100% of the available disk space? Thank you in advance!
Created 02-27-2018 08:36 AM
Hi @lizard
By default a DataNode writes new block replicas to disk volumes solely on a round-robin basis. You can configure a volume-choosing policy that causes the DataNode to take into account how much space is available on each volume when deciding where to place a new replica.
source: https://www.cloudera.com/documentation/enterprise/latest/topics/admin_dn_storage_balancing.html
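To check which policy is currently configured, something along these lines should work (a rough sketch; run it on a node that has the DataNodes' hdfs-site.xml, and note that the class name below is the space-aware policy described in the linked doc):

    # Print the configured volume-choosing policy; an empty value (or the
    # RoundRobinVolumeChoosingPolicy class) means the default round-robin behaviour.
    hdfs getconf -confKey dfs.datanode.fsdataset.volume.choosing.policy

    # To switch to the space-aware policy, the linked doc describes setting this
    # in hdfs-site.xml on the DataNodes and restarting them:
    #   dfs.datanode.fsdataset.volume.choosing.policy =
    #       org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy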
NB: Did you remove all the HDFS Trash files in paths like /user/impala/Trash/*, /user/hdfs/Trash/*, ...?
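To see how much space Trash is actually holding, a quick sketch (the .Trash paths below are the usual layout; adjust to yours):

    # Per-user Trash usage (run as a superuser such as hdfs so all home dirs are readable).
    hdfs dfs -du -s -h '/user/*/.Trash'

    # Remove Trash checkpoints older than the configured retention, for the current user.
    hdfs dfs -expunge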
Good luck man.
Created 03-07-2018 02:23 AM
Hi Lizard,
The linked documentation contains good information; however, if you need to rebalance the disks inside a DataNode immediately, you can also run the disk balancer (note that this is different from the HDFS Balancer).
Disk balancer info is here:
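For reference, the usual workflow looks roughly like this (a sketch; the hostname and the plan path are placeholders):

    # Requires dfs.disk.balancer.enabled=true in hdfs-site.xml on the DataNodes.

    # 1. Build a plan for one DataNode.
    hdfs diskbalancer -plan dn1.example.com

    # 2. Execute the plan file that the previous step reports (path is a placeholder).
    hdfs diskbalancer -execute /system/diskbalancer/<date>/dn1.example.com.plan.json

    # 3. Check progress on that DataNode.
    hdfs diskbalancer -query dn1.example.com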
Cheers,
Pifta
Created 03-07-2018 05:27 AM
Just FYI: strange behavior of the disk balancer
Created 03-12-2018 02:03 AM
Hi!
Many thanks to all for your answers. We managed to delete some data that turned out not to be needed and mitigated the problem this way. Yes, we use Storage Balancing on the DataNodes and also have the disk balancer enabled (fortunately we didn't run into the problems described by @koc). I was just wondering if there is any rule that would 'turn off' writing on a DataNode once it exceeds a threshold, e.g. less than 100GB left across all of its disks together. I didn't find anything like this in the hdfs-default.xml file. So I wonder what would happen if we kept writing data until 100% of HDFS space is used on some nodes - would they really reach 100%, or would HDFS start to prefer other nodes over the filled-up ones? This would probably interfere with replica placement based on the topology of the cluster, so I can imagine there is no mechanism like this. Let me know what you think about it.
Thanks!
Created 03-12-2018 05:35 AM
Hi @lizard,
If an HDFS DataNode reaches maximum capacity on a disk, it will not use that disk, because the allocation of a new block checks the available space on the disk.
This check takes the dfs.datanode.du.reserved setting into account as well, so if you reserve, for example, 10GB of space and a disk has less than 10GB + block size of free space, no block will be allocated on that disk.
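As a quick illustration of that check (a sketch; the byte values in the comments are just common defaults):

    # Reserved non-HDFS space per volume, in bytes (0 if not set).
    hdfs getconf -confKey dfs.datanode.du.reserved

    # Default block size, in bytes (commonly 134217728, i.e. 128 MB).
    hdfs getconf -confKey dfs.blocksize

    # With a 10 GB reservation, a volume stops receiving new replicas once its
    # free space falls below roughly 10 GB + one block (about 10.125 GB).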
If a DataNode is completely full and there is no disk left on which at least one block can be allocated, block allocation issues can occur at the HDFS level. Also, if no disk space is available at all, internal DataNode operations can run into problems; that is why we suggest sizing a cluster so that you keep about 25% free space as a good minimum.
Cheers,
Pifta
Created 04-03-2018 04:06 AM
Hi @pifta,
Many thanks for your explanations. So the conclusion is: the disks would keep filling up even after they reach a critical level of, let's say, 90% utilization, and only when there is barely any space left would new blocks no longer be assigned to them. However, nodes with less data would be preferred the whole time for the sake of maintaining a balanced cluster. So theoretically it is possible to drive the cluster to the point of 100% utilization, where operations are no longer possible for the reasons you mentioned in your post. Good to know, thank you for sharing your knowledge.
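In case it helps anyone watching for the same situation, per-DataNode headroom can be checked with something like this (a sketch):

    # Shows Configured Capacity, DFS Used, DFS Remaining and DFS Used% per DataNode.
    hdfs dfsadmin -report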
Created 04-09-2018 08:51 AM