
HDFS Block size 1GB/2GB

Rising Star

CDH Enterprise 5.14.0

I am trying to use a larger block size, such as 1 GB or 2 GB.

In our case the files are 5 GB to 14 GB in size and we process the whole file per mapper.

Are there any side effects to using a larger block size like 1 GB or 2 GB, for example on HDFS stability during replication?
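For reference, the block size does not have to be changed cluster-wide; it can also be requested per client or per file. Below is a minimal sketch using the standard Hadoop FileSystem Java API (the class name, path, and payload are placeholders, not part of any real job):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LargeBlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Client-side default for files created through this client; the
        // cluster-wide default normally lives in hdfs-site.xml as dfs.blocksize.
        long twoGiB = 2L * 1024 * 1024 * 1024;
        conf.setLong("dfs.blocksize", twoGiB);

        FileSystem fs = FileSystem.get(conf);

        // The block size can also be chosen per file at create() time:
        // create(path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out = fs.create(
                new Path("/tmp/large-block-example.dat"), // placeholder path
                true, 4096, (short) 3, twoGiB)) {
            out.writeBytes("example payload\n");
        }
        fs.close();
    }
}
```

From the shell, the same thing can (I believe) be achieved with hdfs dfs -D dfs.blocksize=2147483648 -put <src> <dst>.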

1 ACCEPTED SOLUTION

Mentor
There are a few cons to raising your block size:

- Increased cost of recovery during write failures

When a client is writing a new block into the DataNode pipeline and one of the DataNodes fails, an enabled-by-default recovery feature will attempt to fill the gap in the replicated pipeline by transferring the partially written block from one of the remaining good DataNodes to a new DataNode.

While this happens, the client is blocked (the outputstream.write(…) caller blocks inside the API code). With an increased block size, the wait also grows greatly, depending on how much of the partial block had been written before the failure occurred.

A worst-case wait would be the time needed to network-copy roughly 1.99 GiB of a 2 GiB block, because the involved DN may have failed at exactly that point (the client-side settings that control this recovery behaviour are sketched after these two points).

- Cost of replication caused by DataNode loss or decommission

When a DataNode is lost or is being decommissioned, the system has to react by filling the gaps this creates in replica counts. With smaller block sizes this work is easy to spread randomly across the cluster, since many different nodes can take part in the re-replication. With larger blocks, only a few DNs hold the affected replicas, so fewer nodes can participate, and another consequence can be more lopsided space usage across DNs. For example, a 14 GB file is about 112 blocks at the 128 MiB default but only 7 blocks at 2 GiB, so far fewer DataNodes hold (and can help re-create) its data.
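To make the first point above concrete: to the best of my knowledge the recovery feature in question is controlled by the dfs.client.block.write.replace-datanode-on-failure.* client settings. A minimal sketch of setting them programmatically, with what I believe are the defaults (verify against the hdfs-default.xml shipped with your release):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class PipelineRecoverySettings {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Whether a failed DataNode in the write pipeline is replaced at all.
        conf.setBoolean(
                "dfs.client.block.write.replace-datanode-on-failure.enable", true);

        // DEFAULT replaces a DN only under certain conditions (e.g. larger
        // pipelines or appends); ALWAYS and NEVER are the other accepted values.
        conf.set(
                "dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");

        // If true, the client keeps writing when the replacement itself fails,
        // instead of blocking, at the cost of temporarily fewer replicas.
        conf.setBoolean(
                "dfs.client.block.write.replace-datanode-on-failure.best-effort", false);

        // Any FileSystem obtained from this Configuration uses these settings
        // for its write pipelines.
        FileSystem fs = FileSystem.get(conf);
        fs.close();
    }
}
```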

That said, use of 1-2 GiB blocks is not unheard of, and I've seen a few large clusters apply that as their default block size. It's just worth being aware of the cons, watching for such impact, and tuning accordingly as you go.
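One way of watching for that impact is to check how the blocks of a large file actually spread across DataNodes. A minimal sketch using the standard FileSystem API (the path is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSpreadCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/tmp/large-block-example.dat"); // placeholder path
        FileStatus status = fs.getFileStatus(file);

        // One BlockLocation per block, listing the hosts that hold its replicas.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        System.out.println("blocks: " + blocks.length);
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```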

HDFS certainly functions at its best with large files, and your usage seems in line with that.


2 REPLIES

Master Collaborator

Hi @sbpothineni, why not use CombineFileInputFormat?
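For illustration, a minimal sketch of what that could look like in a MapReduce driver, assuming text input and CombineTextInputFormat (the commonly used concrete subclass); the class name, paths, and the 2 GiB target split size are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CombineInputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine-input-example");
        job.setJarByClass(CombineInputDriver.class);

        // Pack multiple blocks (or files) into each split so a single mapper
        // sees up to ~2 GiB of input, without changing the HDFS block size.
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 2L * 1024 * 1024 * 1024);

        job.setMapperClass(Mapper.class); // identity mapper as a placeholder
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        job.setNumReduceTasks(0);

        FileInputFormat.addInputPath(job, new Path("/data/input"));    // placeholder
        FileOutputFormat.setOutputPath(job, new Path("/data/output")); // placeholder

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```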
