
Hadoop Block Size

Contributor

Hello, I have the following queries:

Let me start with this: a hard disk has multiple sectors, and the hard disk block size is usually 4 KB. This block size is the physical block on the hard disk.

On top of this we install an operating system, which sets up a filesystem, and these days these filesystems have a logical block size of 4 KB. Is this block size configurable?

1. If it is, how can we configure it?

2. How are logical blocks arranged on the physical hard disk? For example, if the logical block size is set to 16 KB, will the OS allocate contiguous physical 4 KB disk blocks, so that each of our logical blocks maps to four 4 KB blocks laid out one after the other?

I ask this because we will be installing Hadoop on top of a Unix OS, and HDFS has a block size of 64 or 128 MB; this large block size is what makes reads and writes efficient.

My confusion is that ultimately the data in these blocks is stored on physical hard disk blocks that are just 4 KB.
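To make the layering concrete, here is a minimal Python sketch (assuming a Linux host and a block device named sda, which may differ on your machine) that reads the filesystem's logical block size and the disk's own sector sizes:

```python
import os

# Logical block size reported by the mounted filesystem (ext3/ext4, etc.).
# statvfs() is POSIX; f_frsize is the fundamental filesystem block size.
vfs = os.statvfs("/")
print("filesystem block size:", vfs.f_frsize, "bytes")

# Sector sizes reported by the disk itself via sysfs.
# "sda" is an assumption -- substitute your actual block device.
for name in ("logical_block_size", "physical_block_size"):
    with open(f"/sys/block/sda/queue/{name}") as f:
        print(name + ":", f.read().strip(), "bytes")
```

Typically the filesystem reports a 4 KB block while the disk reports 512 B or 4 KB sectors; a single filesystem block is simply stored as a run of whole sectors.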

4 REPLIES

Master Guru

I think that question was well answered in this thread.

https://community.hortonworks.com/questions/27567/write-performance-in-hdfs.html#answer-27633

If I can quote myself, here is the explanation of why the Hadoop block size is much larger than a local filesystem block:

"2. As HDFS is a Virtual Filesystem, the data it stores will ultimately stored on Underlying Operating system (Most cases, Linux). Assuming Linux has Ext3 File system (whose block size is 4KB), How does having 64MB/128 MB Block size in HDFS help ? Does the 64 MB Block in HDFS will be split into x * 4 KB Blocks of underlying Opertating system ?"

Again, a small additional comment to what Kuldeep said. An HDFS block is just a Linux file, so yes, all the underlying details of ext3 (or whatever filesystem you use) still apply. The reason blocks are so big is not the storage on the local node but the central FS metadata, as the two points below explain.
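To make the "a block is just a Linux file" point concrete, here is a minimal sketch, assuming a hypothetical DataNode storage directory of /hadoop/hdfs/data (check dfs.datanode.data.dir in hdfs-site.xml for the real path on your node). Block replicas show up there as ordinary blk_* files:

```python
import os

# Hypothetical DataNode storage directory; check dfs.datanode.data.dir in
# hdfs-site.xml for the real location on your node.
DATA_DIR = "/hadoop/hdfs/data"

# Each HDFS block replica is stored as a plain local file named blk_<id>,
# alongside a small .meta file holding its checksums.
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        if name.startswith("blk_") and not name.endswith(".meta"):
            path = os.path.join(root, name)
            size_mb = os.path.getsize(path) / (1024 * 1024)
            print(f"{path}  {size_mb:.1f} MB")
```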

- To have a distributed filesystem you need a central FS image that keeps track of ALL the blocks on ALL the servers in the system. In HDFS this is the NameNode. It keeps track of all the files in HDFS and all the blocks on all the DataNodes. This needs to be in memory to support the high number of concurrent tasks and operations happening in a Hadoop cluster, so the traditional ways of setting this up (B-trees) don't really work. It also needs to cover all nodes and all disks, so it has to keep track of thousands of nodes with tens of thousands of drives. Doing that with 4 KB blocks wouldn't scale, so the block size is around 128 MB on most systems. (You can count roughly 1 GB of NameNode memory for 100 TB of data.)

- If you want to process a huge file in HDFS you need to run a parallel task on it (MapReduce, Tez, Spark, ...). In this case each task gets one block of data and reads it, which might be local or not. Reading a big 128 MB block or sending it over the network is efficient; doing the same with 30,000 tiny 4 KB files would be very inefficient. (The sketch below puts rough numbers on both points.)

That is the reason for the block size.
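A small back-of-the-envelope sketch of both points above (the 100 TB cluster and the 10 GB file are just illustrative figures):

```python
# Rough arithmetic behind the two points above: the NameNode keeps an
# in-memory entry for every block, and a processing engine launches roughly
# one task per block.

TOTAL_DATA = 100 * 1024**4      # 100 TB of data tracked by the NameNode
FILE_SIZE = 10 * 1024**3        # one 10 GB file to process in parallel

for block_size, label in ((4 * 1024, "4 KB"), (128 * 1024**2, "128 MB")):
    namenode_entries = TOTAL_DATA // block_size
    tasks_for_file = -(-FILE_SIZE // block_size)   # ceiling division
    print(f"{label:>6} blocks: {namenode_entries:>15,} block entries to track, "
          f"{tasks_for_file:>9,} tasks for the 10 GB file")
```

With 4 KB blocks the NameNode would have to hold tens of billions of entries and a single 10 GB file would explode into millions of tasks, while 128 MB blocks keep both numbers manageable.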

Contributor

Thanks @drussell and @Benjamin Leonhardi for your amazing responses, they helped me a lot.

I have a few more queries that are a little outside the Hadoop window:

1. Like an HDFS block, does our local Unix filesystem (e.g., ext3 or ext4) also store data in terms of logical blocks rather than the disk block size? If so, can we configure that local filesystem block size to be larger?

2. How is data actually stored in Windows? Is it similar to UNIX, i.e., in blocks?

Master Guru

1) You can enable some parameters like huge file support in Linux to make it faster, but nothing near 128 MB. You just don't get much benefit there.

If you want to know how to tune the filesystem, you can refer to Appendix B of our reference architecture:

https://hortonworks.com/wp-content/uploads/2013/10/4AA5-9017ENW.pdf

- Mounting filesystems with nodiratime, noatime

- Changing the block readahead to 8k

Some filesystems also let you configure huge file support.
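A small sketch, assuming a Linux node and a data disk named sda (adjust both to your system), that checks whether these settings are in place:

```python
# Check the settings mentioned above on a running node.
# "sda" is an assumption -- use your actual data disk device name.

# 1) Mount options: look for noatime / nodiratime in /proc/mounts.
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype.startswith("ext"):
            flags = options.split(",")
            print(f"{mountpoint:<20} noatime={'noatime' in flags} "
                  f"nodiratime={'nodiratime' in flags}")

# 2) Readahead: the kernel exposes the current value (in KB) per device.
with open("/sys/block/sda/queue/read_ahead_kb") as f:
    print("read_ahead_kb for sda:", f.read().strip())
```

Note that blockdev --setra counts in 512-byte sectors while the sysfs file above is in kilobytes, so double-check the units before changing the readahead.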

2) Windows

Basically yes. NTFS has something it calls a cluster, but it seems to be very similar to a block.

https://support.microsoft.com/en-gb/kb/140365
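If you want to check the cluster size yourself, here is a minimal Windows-only sketch using the Win32 GetDiskFreeSpaceW call via ctypes (C:\ is just an example drive letter):

```python
import ctypes

# Windows-only: GetDiskFreeSpaceW reports sectors per cluster and bytes per
# sector for a volume; their product is the NTFS cluster ("allocation unit")
# size. "C:\\" is just an example drive letter.
sectors_per_cluster = ctypes.c_ulong()
bytes_per_sector = ctypes.c_ulong()
free_clusters = ctypes.c_ulong()
total_clusters = ctypes.c_ulong()

ctypes.windll.kernel32.GetDiskFreeSpaceW(
    ctypes.c_wchar_p("C:\\"),
    ctypes.byref(sectors_per_cluster),
    ctypes.byref(bytes_per_sector),
    ctypes.byref(free_clusters),
    ctypes.byref(total_clusters),
)

print("NTFS cluster size:",
      sectors_per_cluster.value * bytes_per_sector.value, "bytes")
```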