Support Questions
Find answers, ask questions, and share your expertise

What is a "huge" dataset for Hive?

Solved


New Contributor

I have read a lot of articles recommending the fastest solutions for computing datasets.

I saw that Hive on Tez is 100x faster than Hive on MapReduce, but that Spark is 100x faster than Hive (Tez or MR not mentioned ;-)), and finally that "it depends on whether you are computing huge datasets or not".

My first question is: above what size can a dataset be considered "huge"? I presume the number of rows and columns is significant...

My second question is: what if I am querying only a few partitions of a large dataset? Doesn't that amount to querying a small dataset?
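To illustrate the second question: when a query filters on the partition column, Hive prunes the untouched partitions and scans only the matching files. A minimal sketch (the table and column names here are hypothetical):

```sql
-- Hypothetical table (names are made up for illustration),
-- partitioned by day.
CREATE TABLE web_logs (
  user_id BIGINT,
  url     STRING
)
PARTITIONED BY (event_date STRING)
STORED AS ORC;

-- Because the WHERE clause fixes the partition column, Hive only
-- reads the files under event_date='2019-01-01'; the other
-- partitions are never scanned, however large the table is.
-- EXPLAIN shows the pruned input paths in the plan.
EXPLAIN
SELECT url, COUNT(*) AS hits
FROM web_logs
WHERE event_date = '2019-01-01'
GROUP BY url;
```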

1 ACCEPTED SOLUTION

Accepted Solutions

Re: What is a "huge" dataset for Hive?

Hi @Sebastien F. Hive has been documented as running on 300+ PB of raw storage at Facebook. The largest cluster is 4,500+ nodes at Yahoo. Yahoo Japan was able to run 100,000 queries per hour, and LLAP has run at 100 million rows/s per node.

Hive/Tez scales to hundreds of PB. LLAP is meant for smaller datasets (1-10 TB), which are typical of standard BI workloads. That said, LLAP allows you to use SSDs for its cache, so you can extend this into the hundreds of TB (if you can afford that much SSD storage).
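As a rough sketch of what the SSD-backed LLAP cache looks like in hive-site.xml (the property names come from Hive's LLAP configuration; the size and path values here are illustrative assumptions, not recommendations):

```xml
<!-- Enable the LLAP I/O layer and its cache. -->
<property>
  <name>hive.llap.io.enabled</name>
  <value>true</value>
</property>
<!-- Cache size; tune to your hardware (illustrative value). -->
<property>
  <name>hive.llap.io.memory.size</name>
  <value>64Gb</value>
</property>
<!-- Back the cache with memory-mapped files on SSD instead of RAM. -->
<property>
  <name>hive.llap.io.allocator.mmap</name>
  <value>true</value>
</property>
<!-- Hypothetical SSD mount point for the cache files. -->
<property>
  <name>hive.llap.io.allocator.mmap.path</name>
  <value>/ssd/llap-cache</value>
</property>
```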

Hope this helps!

3 Replies

Re: What is a "huge" dataset for Hive?

New Contributor

...and I have always wondered how benchmarks are performed. Is it just the timing of an execution on a "clean" platform?


Re: What is a "huge" dataset for Hive?

New Contributor

Hi @Scott Shaw; it helps :) Thanks a lot.