Support Questions
Find answers, ask questions, and share your expertise

Why is the HDFS block size 128 MB? Why not 100 MB or another size?

Solved


Expert Contributor

Hi,

Hope all is well.

I'm looking for the reason why the data block size is 128 MB in Hadoop 2.x. What logic was used to decide the size should be 128 MB? Why wasn't it defined as 100 MB?

1 ACCEPTED SOLUTION


Re: Why is the HDFS block size 128 MB? Why not 100 MB or another size?

Expert Contributor

The logic is quite simple: 128 MB is a power of two, which means the number can be represented exactly in binary:

128 MB = 131,072 KB = 134,217,728 bytes = 2^27 bytes (in binary, a 1 followed by 27 zeros)

With a power-of-two size, blocks line up with the other power-of-two units in the system (memory pages, disk sectors, buffer sizes), so no space is wasted when the data is held in memory.

You could say this is a general convention for sizing storage in computer science, not something specific to big data.
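The arithmetic above is easy to check for yourself; here is a small Python sketch (the `is_power_of_two` helper is just for illustration, not anything from Hadoop):

```python
block_size = 128 * 1024 * 1024   # 128 MB expressed in bytes
print(block_size)                # 134217728
print(block_size == 2 ** 27)     # True

def is_power_of_two(n):
    # n & (n - 1) clears the lowest set bit, so the result is zero
    # only when n has exactly one bit set, i.e. n is a power of two.
    return n > 0 and n & (n - 1) == 0

print(is_power_of_two(block_size))          # True
print(is_power_of_two(100 * 1024 * 1024))   # False: 100 MB is not a power of two
```

This is also why sizes like 100 MB, 126 MB, or 132 MB never show up as defaults: none of them is a power of two.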

3 REPLIES

Re: Why is the HDFS block size 128 MB? Why not 100 MB or another size?

New Contributor

Mainly it's for performance reasons.

Have a read through this: https://community.hortonworks.com/questions/27567/write-performance-in-hdfs.html#answer-27633
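The linked answer comes down to the classic sizing argument: make a block large enough that disk seek time is a small fraction of the time spent actually transferring the block. A rough back-of-the-envelope sketch in Python (the 10 ms seek and 100 MB/s throughput figures are illustrative assumptions, not measurements from the link):

```python
seek_time_s = 0.010        # assumed average disk seek: 10 ms
throughput_mb_s = 100.0    # assumed sustained transfer rate: 100 MB/s

for block_mb in (1, 64, 128, 256):
    transfer_time_s = block_mb / throughput_mb_s
    # Fraction of total I/O time spent seeking rather than transferring.
    overhead = seek_time_s / (seek_time_s + transfer_time_s)
    print(f"{block_mb:4d} MB block: seek overhead {overhead:.2%}")
```

With these figures a 1 MB block spends half its I/O time seeking, while a 128 MB block keeps seek overhead under 1%; doubling again buys little more, which is why the default stops at a convenient power of two around that size.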

Re: Why is the HDFS block size 128 MB? Why not 100 MB or another size?

Expert Contributor

Thanks for the reply.

But I'm still in doubt: why is it not 126 MB or 132 MB?
