Support Questions

How much actual space is required to store 10 GB in HDFS? And in HBase?

Explorer

Hi,

Let’s assume I have a 10 GB file to store in HDFS. The cluster’s block size is 256 MB, the replication factor is 3, and I am using 3 DataNodes.

How much space does this 10 GB of data require on each DataNode, the NameNode, and the Secondary NameNode? (I am really interested in understanding the space utilization of the NameNode and Secondary NameNode.)

Also, how much space is required to store the same data in HBase?

Thanks

1 ACCEPTED SOLUTION

Master Guru

HDFS:

- You need (by default) 30 GB in total on the DataNodes (3x replication); with 3 DataNodes that is roughly 10 GB per node.

- On the NameNode the space is negligible: 10 GB / 256 MB = 40 blocks, times 3 replicas = 120 block replicas, which is roughly 12 KB of RAM on the NameNode (you need around 100 bytes of RAM for every block in NameNode memory). You also need a bit of space on disk, but that is even less, since the fsimage stores files but not blocks. NameNodes do need a bit more disk because they also keep edit logs and multiple versions of the fsimage, but it is still very small.
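
A quick back-of-the-envelope sketch in Python of the arithmetic above, using the numbers from the question (10 GB file, 256 MB blocks, 3x replication) and the rough ~100 bytes of NameNode heap per block, which is only a rule of thumb:

import math

# Inputs from the question; ~100 bytes per block is a rough rule of thumb
file_size_gb = 10
block_size_mb = 256
replication = 3
bytes_per_block_in_nn_heap = 100

# Raw disk space across all DataNodes (3x replication)
raw_space_gb = file_size_gb * replication                    # 30 GB total, ~10 GB per node

# Blocks and the NameNode heap they occupy
blocks = math.ceil(file_size_gb * 1024 / block_size_mb)      # 40 blocks
block_replicas = blocks * replication                        # 120 block replicas
nn_heap_kb = block_replicas * bytes_per_block_in_nn_heap / 1024.0   # ~12 KB

print(f"DataNode raw space: {raw_space_gb} GB total")
print(f"Blocks: {blocks}, replicas: {block_replicas}, NameNode heap: ~{nn_heap_kb:.1f} KB")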

HBase:

This is a more complicated question. In HBase it depends on the way you store the data. Every cell in your HBase table is stored in HFiles together with the row key, the column family and qualifier, and the timestamp, so if you store your data in a single column per row, the storage overhead is much lower than if you had hundreds of 2-byte columns. On the other hand, you can also enable compression in HBase, which reduces space.
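
To make the schema point concrete, here is a small Python sketch that estimates the size of one logical row stored either as a single wide value or as one hundred tiny columns. It assumes the standard HBase KeyValue layout (length prefixes, row key, column family, qualifier, 8-byte timestamp, 1-byte type); the row key, family, and qualifier names are made up for illustration, and compression and HFile block/index overhead are ignored:

# Rough per-cell size based on the HBase KeyValue layout:
# 4B key length + 4B value length
# + 2B row length + row + 1B family length + family + qualifier
# + 8B timestamp + 1B key type + value
def keyvalue_size(row_key, family, qualifier, value_len):
    key_len = 2 + len(row_key) + 1 + len(family) + len(qualifier) + 8 + 1
    return 4 + 4 + key_len + value_len

row_key = b"user#0000000001"   # illustrative 15-byte row key
family = b"d"

# Option A: one 200-byte blob per row
single_cell = keyvalue_size(row_key, family, b"blob", 200)

# Option B: one hundred 2-byte columns per row
many_cells = sum(keyvalue_size(row_key, family, f"c{i:02d}".encode(), 2)
                 for i in range(100))

print(f"single wide cell : {single_cell} bytes")
print(f"100 tiny cells   : {many_cells} bytes")

Here the same 200 bytes of payload takes roughly 240 bytes as a single cell but over 4 KB once it is split into 100 tiny cells, before any compression.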

4 REPLIES

Super Guru

@Arunkumar Dhanakumar

Since the replication factor is 3 by default, using Benjamin's approach for sizing, you end up with about 10 GB per DataNode. That is very simplified and assumes big files. In addition to Benjamin's response, keep in mind that the block size matters. The calculation above is a rough order of magnitude and does not account for data stored as many small files that do not fill their blocks. For example, with a 256 MB block size, 100 files of 1 KB each still create 100 separate blocks; the disk usage stays small, but each file and each block adds metadata overhead on the NameNode. Compression also plays a role here, and it depends on the type of compression used. Finally, if your data is stored as ORC, your 10 GB of data could shrink to around 3 GB even without compression.
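
As a rough illustration of the small-file effect, the sketch below compares the NameNode metadata for one 10 GB file against one hundred 1 KB files, reusing the ~100 bytes per NameNode object rule of thumb from above (the real per-object cost varies, and replication is ignored here):

import math

# Rule of thumb (same order of magnitude as used above): each file and each
# block costs roughly 100 bytes of NameNode heap; replication ignored here.
BYTES_PER_NN_OBJECT = 100
BLOCK_SIZE_MB = 256

def namenode_heap_bytes(num_files, file_size_mb):
    blocks_per_file = max(1, math.ceil(file_size_mb / BLOCK_SIZE_MB))
    objects = num_files * (1 + blocks_per_file)   # 1 inode + its blocks
    return objects * BYTES_PER_NN_OBJECT

# One 10 GB file vs. one hundred 1 KB files (the example above)
print(f"1 x 10 GB file : ~{namenode_heap_bytes(1, 10 * 1024) / 1024:.1f} KB of NameNode heap")
print(f"100 x 1 KB file: ~{namenode_heap_bytes(100, 1 / 1024) / 1024:.1f} KB of NameNode heap")

The hundred tiny files use almost no disk, but they already cost several times more NameNode memory than the single 10 GB file.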

Explorer

Thanks for the additional info. I was really curious to understand the NameNode disk utilization.

Since my cluster always loads files above 1 GB, is it OK to keep the 256 MB block size for now, or should I tune it later on?

Right now I am loading the data as text files. Does the compression have to come from the local file system (tar or gz), or is there any default compression technique available through the native HDFS commands?

I know a few native compression techniques are available for HBase; which compression algorithms would be better for storing text data?

I am curious to understand how to get minimum disk utilization with good performance.

Thanks.

Master Guru

@Arunkumar Dhanakumar

You can simply compress text files before you upload them. Common codecs include gzip, Snappy, and LZO. HDFS does not care; all MapReduce/Hive/Pig jobs support these standard codecs and identify them by their file extension.
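
A minimal sketch of the "compress locally, then upload" approach, assuming Python's standard gzip/shutil modules and the stock hdfs dfs -put command (the file names and the /data/raw/ target path are just placeholders):

import gzip
import shutil
import subprocess

# Gzip a local text file; downstream MapReduce/Hive/Pig jobs recognize the
# .gz extension and decompress transparently (one mapper per .gz file).
src = "events.txt"          # illustrative local file name
dst = "events.txt.gz"

with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

# Upload the compressed file with the standard HDFS CLI
subprocess.run(["hdfs", "dfs", "-put", dst, "/data/raw/"], check=True)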

If you use gzip, you just need to make sure that each file is not too big, since gzip is not splittable, i.e. each gzip file will be processed by a single mapper.

You can also compress the output of jobs. So you could run a Pig job that reads the text files and writes them out again; I think you simply need to add .gz, for example, to the output name. Again, keep in mind that each part file is then gzipped and will later run in a single mapper. LZO and Snappy, on the other hand, can be split (LZO needs an index, and Snappy is splittable inside container formats such as SequenceFiles), but they do not compress as well.

http://stackoverflow.com/questions/4968843/how-do-i-store-gzipped-files-using-pigstorage-in-apache-p...