Created 11-02-2017 03:42 PM
I have read a lot of articles advising on the fastest solutions for processing datasets.
I saw that Hive on Tez is 100x faster than Hive on MapReduce, but Spark is 100x faster than Hive (Tez or MR not mentioned ;-)), and finally, "it depends on whether you are processing huge datasets or not".
My first question is: above what size can I consider a dataset "huge"? I presume the number of rows and columns is significant...
My second question is: what if I am querying only a few partitions of a large dataset? Doesn't that come down to querying a small dataset?
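On the second question, this is where partition pruning matters: when the WHERE clause filters on a partition column, Hive only scans the matching partitions, so the effective dataset size is the size of those partitions, not of the whole table. A minimal HiveQL sketch (the table and column names here are hypothetical, just for illustration):

```sql
-- Hypothetical table partitioned by day.
CREATE TABLE sales (
  order_id BIGINT,
  amount   DOUBLE
)
PARTITIONED BY (sale_date STRING)
STORED AS ORC;

-- Filtering on the partition column lets Hive prune partitions:
-- only the files under sale_date=2017-11-01 are read,
-- however large the full table is.
SELECT SUM(amount)
FROM sales
WHERE sale_date = '2017-11-01';
```

If you run EXPLAIN DEPENDENCY on such a query, the output should list only the pruned set of input partitions, which is a quick way to check that pruning actually applies.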
Created 11-02-2017 04:04 PM
...and I have always wondered how benchmarks are performed. Is it just timing an execution on a "clean" platform?
Created 11-02-2017 05:29 PM
Hi @Sebastien F, Hive has been documented running on 300+ PB of raw storage at Facebook. The largest cluster is 4,500+ nodes at Yahoo. Yahoo Japan was able to run 100,000 queries per hour, and LLAP ran 100 million rows/s per node.
Hive/Tez scales to hundreds of PB. LLAP is meant for smaller datasets (1-10 TB), which are typical for standard BI-type workloads. That said, LLAP lets you use SSD for its cache, so you can extend this to hundreds of TB (if you can afford that much SSD storage).
Hope this helps!
Created 11-03-2017 08:03 AM
Hi @Scott Shaw, it helps 🙂 Thanks a lot.