
In which cases is changing the Hadoop block size not recommended?

Expert Contributor

Hi Team,

Consider a Hadoop cluster with the default block size of 64 MB. We have a case where we would like to use Hadoop for storing historical data and retrieving it as needed.

The historical data would be in the form of archives containing many small files (millions of them), which is why we would like to reduce the default block size in Hadoop to 32 MB.

I also understand that changing the default size to 32 MB may adversely affect us if we plan to use the same cluster for applications that store very large files.

Can anyone advise what to do in such a situation?

1 ACCEPTED SOLUTION

10 REPLIES

Super Guru
@ripunjay godhani

Before I answer your question, please read the following discussion, which will help you understand why larger block sizes are required for Hadoop.

https://community.hortonworks.com/questions/51408/hdfs-federation-1.html

Now, assuming you have read the link above, you understand why small files will not work well with Hadoop. So not only do you need a 64 MB block size, you should actually bump it up to 128 MB (that is the default in HDP).

This is not bad news for your use case. There are literally 1000-plus deployments at this point where historical data is archived in Hadoop. Why do you have small files? Are those files small because the whole table is only a few MB (less than 64 MB)? What is the total amount of data you are looking to offload into Hadoop? Once we know this, we can answer better, but offloading historical data is a classic Hadoop use case and you shouldn't run into the small-files problem.
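To see why millions of small files hurt regardless of the block size, here is a rough back-of-the-envelope sketch. It assumes the commonly cited rule of thumb of roughly 150 bytes of NameNode heap per file/block object (an approximation, not an exact figure), and the file counts are purely illustrative:

```python
import math

# Commonly cited rule of thumb: each file and each block is an in-memory
# NameNode object costing on the order of ~150 bytes. Approximate only.
BYTES_PER_OBJECT = 150

def namenode_objects(num_files: int, avg_file_mb: float, block_mb: int) -> int:
    """One object per file plus one per block the file occupies."""
    blocks_per_file = max(1, math.ceil(avg_file_mb / block_mb))
    return num_files * (1 + blocks_per_file)

def heap_mb(num_files: int, avg_file_mb: float, block_mb: int) -> float:
    """Estimated NameNode heap in MB for this set of files."""
    return namenode_objects(num_files, avg_file_mb, block_mb) * BYTES_PER_OBJECT / 1024**2

# 5 million 1 MB XML files: 10 million objects no matter the block size,
# because each small file still occupies (at least) one block.
small = heap_mb(5_000_000, avg_file_mb=1, block_mb=64)

# Roughly the same 5 TB packed into 500 x 10 GB archives with 128 MB blocks.
big = heap_mb(500, avg_file_mb=10 * 1024, block_mb=128)

print(f"small files: ~{small:.0f} MB of heap, archives: ~{big:.1f} MB of heap")
```

The object count, and hence NameNode memory pressure, is dominated by the number of files, which is why packing the data into large archives matters far more than the block size setting.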

Expert Contributor

@mqureshi Hi, our application generates small XML files, which are stored on a NAS, with the XML-associated metadata in a DB.

The plan is to extract the metadata from the DB into one file and compress the XML into one huge archive, say 10 GB. Suppose each archive is 10 GB and the data is 3 months old.

I wanted to know the best solution for storing and accessing this archived data in Hadoop: HDFS, Hive, or HBase?

Please advise on what you think is the better approach for reading this archived data.

Suppose I store this archived data in Hive; how do I retrieve it?

Please guide me on storing archived data in Hive, and also on retrieving/reading it when needed.

Super Guru

This sounds pretty simple. Here is how I would do it, but you can follow your own path.

1. Import the XML archive data into Hadoop.

2. (My next step is optional, but to me it's the right way to do it.) Flatten the XML into Avro and then ORC (there is a lot of material available on this). Use nested types to retain the XML structure; it will be more efficient when reading.

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTyp...

https://orc.apache.org/docs/types.html

Like I said, this is optional. You can keep your data as XML and read the XML directly from Hive.

3. Initially keep compression enabled with Snappy, but consider disabling it if the data set is not too large and queries bottleneck on CPU.

That's pretty much it. It's a pretty straightforward use case.
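As a rough sketch of steps 2 and 3, assuming the XML has already been flattened (table and column names below are hypothetical, invented for illustration), the ORC table with a nested type and Snappy compression could look something like this in Hive:

```sql
-- Hypothetical target table for the flattened XML records.
-- The nested STRUCT retains part of the original XML hierarchy.
CREATE TABLE archived_records (
  record_id     STRING,
  archived_date DATE,
  payload       STRUCT<
                  header : STRUCT<source:STRING, created:TIMESTAMP>,
                  body   : STRING
                >
)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "SNAPPY");

-- Retrieval is then ordinary HiveQL, e.g.:
-- SELECT record_id, payload.header.source
-- FROM archived_records
-- WHERE archived_date >= '2016-01-01';
```

Partitioning the table (for example by month) would be a natural addition for 3-month-old archival data, since it lets queries skip whole partitions at read time.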

Expert Contributor

@mqureshi Thanks a lot for your help and guidance 🙂 . Thanks for explaining in detail.


Expert Contributor
@hduraiswamy

I appreciate your inputs.

Please advise on how to store and read archived data from Hive.

While storing the data in Hive, should I save it as a .har (Hadoop archive) in HDFS?


Super Collaborator

I think this question is similar to this one https://community.hortonworks.com/questions/79103/what-is-the-best-way-to-store-small-files-in-hadoo... and I have posted my answer there.


Master Mentor

Something I'd like to suggest, based on the assumption that storage savings is the primary goal here:

1. Leverage the HDFS tiered-storage tier called ARCHIVE: http://www.ebaytechblog.com/2015/01/12/hdfs-storage-efficiency-using-tiered-storage/

2. Erasure coding is a new mechanism, soon to be delivered in HDP, that promises the same fault-tolerance guarantees as a replication factor of 3 but with only about 1.5x storage overhead. That means you no longer store 3 full replicas of each block; the data plus parity occupies roughly 1.5x the original size. https://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.htm

I'd consider these paths before reducing block size.
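For reference, the ARCHIVE tier in point 1 is driven by HDFS storage policies. A sketch of the admin commands, assuming the datanodes have storage directories tagged as ARCHIVE and using a hypothetical path, might look like:

```shell
# Assign the COLD storage policy to an archival directory; COLD places
# all replicas on storage tagged [ARCHIVE] in dfs.datanode.data.dir.
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD

# Verify the policy on the directory.
hdfs storagepolicies -getStoragePolicy -path /data/archive

# Migrate existing blocks so their placement matches the new policy.
hdfs mover -p /data/archive
```

Note that setting the policy only affects newly written blocks; the mover is what relocates data that was written before the policy change.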