Support Questions


Why is the HDFS block size 128 MB? Why not 100 MB or some other value?

Rising Star

Hi,

Hope all is well.

I'm looking for the reason why the default data block size in Hadoop 2.x is 128 MB. What logic was used to decide that the size should be 128 MB? Why wasn't it defined as 100 MB?

1 ACCEPTED SOLUTION

Expert Contributor

The logic is quite simple: 128 MB is a power of 2, which means the number has an exact binary representation:

128 MB = 131072 KB = 134217728 bytes = 1000000000000000000000000000 in binary (2^27)

With a power-of-two size, blocks line up with the binary units used for memory and disk addressing, so no space is wasted when the data is stored.

You could say this is a general norm for data storage in computer science, not just for big data.
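
If you want to double-check the arithmetic, here is a quick sketch in plain Java (JDK only, nothing Hadoop-specific assumed):

public class BlockSizeMath {
    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;                // 128 MB expressed in bytes
        System.out.println(blockSize);                      // 134217728
        System.out.println(Long.toBinaryString(blockSize)); // 1 followed by 27 zeros, i.e. 2^27
        // A power of two has exactly one bit set, so x & (x - 1) is zero:
        System.out.println((blockSize & (blockSize - 1)) == 0); // true
        long hundredMb = 100L * 1024 * 1024;                // 100 MB, by contrast, is not a power of two
        System.out.println((hundredMb & (hundredMb - 1)) == 0); // false
    }
}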


3 REPLIES

Cloudera Employee

Mainly, it's for performance reasons.

Have a read through this: https://community.hortonworks.com/questions/27567/write-performance-in-hdfs.html#answer-27633
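
Also worth noting: 128 MB is only the default (the dfs.blocksize property, 134217728 bytes in Hadoop 2.x); you can override it per cluster in hdfs-site.xml or even per file. A minimal sketch with the Java client, assuming a reachable HDFS and the Hadoop client libraries on the classpath (the path /tmp/example.txt is just an illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CustomBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Ask for a 256 MB block size for this one file; the value must be a
        // multiple of dfs.bytes-per-checksum (512 bytes by default).
        long blockSize = 256L * 1024 * 1024;
        FSDataOutputStream out = fs.create(
                new Path("/tmp/example.txt"), // hypothetical path, for illustration only
                true,                         // overwrite if it exists
                4096,                         // I/O buffer size in bytes
                (short) 3,                    // replication factor
                blockSize);
        out.writeUTF("hello");
        out.close();
        fs.close();
    }
}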

Rising Star

Thanks for the reply.

But I'm still in doubt: why is it not 126 MB or 132 MB?
