
What is a "huge" dataset for Hive?


I have read a lot of articles advising on the fastest solutions for processing datasets.

I saw that Hive on Tez is 100x faster than Hive on MapReduce, but that Spark is 100x faster than Hive (whether on Tez or MR is not mentioned ;-)), and, finally, that "it depends on whether you compute huge datasets or not".
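
For reference, the engine behind a Hive query is just a session setting, so the same query can be run on either one; a minimal sketch (the table name events is invented):

-- The same query, executed on each engine in turn.
SET hive.execution.engine=mr;   -- classic Hive on MapReduce
SELECT count(*) FROM events;

SET hive.execution.engine=tez;  -- Hive on Tez
SELECT count(*) FROM events;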

My first question is: above what size can I consider a dataset "huge"? I presume the number of rows and columns is significant...

My second question is: what if I am querying only a few partitions of a large dataset? Doesn't that come down to querying a small dataset?
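
To make the second question concrete, here is the kind of query I have in mind; a sketch with invented table and partition names:

-- A large table partitioned by day.
CREATE TABLE sales (id BIGINT, amount DOUBLE)
PARTITIONED BY (sale_date STRING)
STORED AS ORC;

-- Filtering on the partition column lets Hive prune partitions:
-- only the files under sale_date=2017-01-01 and 2017-01-02 are read,
-- however big the table is as a whole.
SELECT sum(amount)
FROM sales
WHERE sale_date IN ('2017-01-01', '2017-01-02');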

1 ACCEPTED SOLUTION


Hi @Sebastien F. Hive has been documented running on 300+ PB of raw storage at Facebook. The largest cluster is 4,500+ nodes, at Yahoo. Yahoo Japan was able to run 100,000 queries per hour, and LLAP has run 100 million rows/s per node.

Hive/Tez scales to hundreds of PB. LLAP is meant for smaller datasets (1-10 TB), which are typical for standard BI-type workloads. That said, LLAP lets you use SSD for its cache, so you can extend this to hundreds of TB (if you can afford that much SSD storage).
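
A minimal sketch of the settings involved (the IO/allocator settings belong in the LLAP daemon configuration rather than a user session, and the SSD path is just an example):

-- Route queries to the LLAP daemons instead of plain Tez containers.
SET hive.execution.mode=llap;
-- Daemon-side settings (hive-site.xml), shown here for readability:
SET hive.llap.io.enabled=true;           -- enable the LLAP IO layer and cache
SET hive.llap.io.allocator.mmap=true;    -- back the cache with mmap'd files
SET hive.llap.io.allocator.mmap.path=/ssd/llap-cache;  -- example SSD directory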

Hope this helps!


3 REPLIES


...and I have always wondered how benchmarks are performed; is it just a timing of an execution on a "clean" platform?



Hi @Scott Shaw, it helps 🙂 thanks a lot.