Created on 06-26-2017 01:00 AM - edited 09-16-2022 04:49 AM
While doing TPC-DS testing on Impala+Kudu vs Impala+Parquet (following https://github.com/cloudera/impala-tpcds-kit), we found that for most of the queries, Impala+Parquet is 2x to 10x faster than Impala+Kudu.
Has anybody done the same testing?
PS: We are running Kudu 1.3.0 with CDH 5.10.
Created 06-26-2017 08:41 AM
We'd expect Kudu to be slower than Parquet on a pure read benchmark, but not 10x slower - that may be a configuration problem. We've published results on the Cloudera blog before that demonstrate this: http://blog.cloudera.com/blog/2017/02/performance-comparing-of-different-file-formats-and-storage-en...
Parquet is a read-only storage format, while Kudu supports row-level updates, so they make different trade-offs. I think we have headroom to significantly improve the performance of both table formats in Impala over time.
For example, in Impala 2.9 / CDH 5.12, IMPALA-5347 and IMPALA-5304 improve pure Parquet scan performance by 50%+ on some workloads, and I think there are probably similar opportunities for Kudu.
Created 06-26-2017 10:46 AM
@mbigelow, You've brought up a good point that HDFS is going to be strong for some workloads, while Kudu will be better for others. It's not quite right to characterize Kudu as a file system, however. Kudu is a distributed, columnar storage engine. In other words, Kudu provides storage for tables, not files. So in this case it is fair to compare Impala+Kudu to Impala+HDFS+Parquet.
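To make that concrete, here is a minimal sketch of what the two backends look like from Impala's side (table and column names are invented for illustration, not from the benchmark schema):

-- An HDFS-backed table: Impala scans Parquet files under the table's directory.
CREATE TABLE sales_parquet (
  sale_id BIGINT,
  amount DOUBLE
)
STORED AS PARQUET;

-- A Kudu-backed table: Kudu stores the rows itself, so the table declares a
-- primary key and a partitioning scheme instead of pointing at files.
CREATE TABLE sales_kudu (
  sale_id BIGINT,
  amount DOUBLE,
  PRIMARY KEY (sale_id)
)
PARTITION BY HASH (sale_id) PARTITIONS 16
STORED AS KUDU;

-- Queries are written identically against either table; only the storage engine differs.
SELECT SUM(amount) FROM sales_parquet;
SELECT SUM(amount) FROM sales_kudu;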
Created 06-26-2017 11:25 PM
Thanks all for your replies. Here are some details about the testing.
We are running impalad + Kudu on 14 nodes.
Node info:
CPU model: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
CPU cores: 32
Memory: 128 GB
Disk: 12 x 4 TB SAS
impalad and Kudu are installed on each node, with 16 GB of memory for Kudu and 96 GB for impalad.
The Parquet files are stored on another Hadoop cluster of 80+ nodes (running HDFS + YARN).
We are running the TPC-DS queries (https://github.com/cloudera/impala-tpcds-kit).
Each of the 18 queries was run 3 times (3 times on Impala+Kudu and 3 times on Impala+Parquet), and we then calculated the average time. Comparing the average query time of each query, we found that Kudu is slower than Parquet. Here is the result for the 18 queries:
We are planning to set up an OLAP system, so we compared Impala+Kudu vs Impala+Parquet to see which is the better choice.
Created 06-27-2017 03:06 PM
Make sure you run COMPUTE STATS after loading the data so that Impala knows how to join the Kudu tables.
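For example, something along these lines (the table names are just the usual TPC-DS ones from the kit; adjust to your schema):

-- After loading, compute stats on the fact table and every dimension table so the
-- planner has row counts and column NDVs to pick the join order and join strategy:
COMPUTE STATS store_sales;
COMPUTE STATS date_dim;
COMPUTE STATS item;

-- Sanity-check that the stats were actually collected:
SHOW TABLE STATS store_sales;
SHOW COLUMN STATS store_sales;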
What is the total size of your data set?
I am surprised at the difference in your numbers and I think they should be closer if tuned correctly. Regardless, if you don't need to be able to do online inserts and updates, then Kudu won't buy you much over the raw scan speed of an immutable on-disk format like Parquet on HDFS with Impala.
Created 06-27-2017 03:50 PM
Can you also share how you partitioned your Kudu table?
Created 06-27-2017 09:05 PM
1. Make sure you run COMPUTE STATS: yes, we do this after loading the data.
2. What is the total size of your data set?
The impala-tpcds-kit tool creates 9 dimension tables and 1 fact table.
The dimension tables are small (record counts from 1K to 4 million+, depending on the data size generated),
and the fact table is big. Here is the data size <--> record count of the fact table:
512 GB <--> 4,224,587,147 rows
256 GB <--> 2,112,281,549 rows
64 GB <--> 528,071,062 rows
3. Can you also share how you partitioned your Kudu table?
For the dimension tables, we hash partition them into 2 partitions by their primary key (no partitioning for the Parquet tables).
For the fact table, we range partition it into 60 partitions by its date field (the Parquet fact table is partitioned into 1800+ partitions).
For the tables created in Kudu, the replication factor is 3. A rough DDL sketch is below.
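Roughly (column names and split values are illustrative, not our exact schema):

-- Dimension tables: hash partitioned into 2 partitions by primary key.
CREATE TABLE item_kudu (
  i_item_sk BIGINT,
  i_item_id STRING,
  PRIMARY KEY (i_item_sk)
)
PARTITION BY HASH (i_item_sk) PARTITIONS 2
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '3');

-- Fact table: range partitioned into 60 partitions by the date key.
CREATE TABLE store_sales_kudu (
  ss_sold_date_sk BIGINT,
  ss_item_sk BIGINT,
  ss_net_paid DOUBLE,
  PRIMARY KEY (ss_sold_date_sk, ss_item_sk)
)
PARTITION BY RANGE (ss_sold_date_sk) (
  PARTITION VALUES < 2450900,
  PARTITION 2450900 <= VALUES < 2451000
  -- ... 60 range partitions in total
)
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '3');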
Created 06-27-2017 09:30 PM
If you are under the scale limits, consider increasing the number of partitions. Impala tends to use one thread per partition when scanning.
Created 06-28-2017 09:38 AM
Impala heavily relies on parallelism for throughput, so if you have 60 partitions for Kudu and 1800 partitions for Parquet, then due to Impala's current single-thread-per-partition limitation you have built in a huge disadvantage for Kudu in this comparison.
Please let us know if you re-run your comparison test.
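If you do recreate the Kudu fact table, a sketch of one way to do it (table and column names are illustrative; the hash bucket count of an existing Kudu table can't be changed in place, hence the create-and-copy):

-- Recreate the fact table with many more tablets so each impalad
-- gets several scan threads' worth of work:
CREATE TABLE store_sales_kudu_wide
PRIMARY KEY (ss_sold_date_sk, ss_item_sk)
PARTITION BY HASH (ss_item_sk) PARTITIONS 120  -- pick a count that spreads well over your 14 nodes
STORED AS KUDU
AS SELECT * FROM store_sales_kudu;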
Created 07-02-2017 07:57 PM
I have re-run the test, and Kudu performs much better this time (though it's still a little bit slower than Parquet). Thanks for @mpercy's suggestion.
I changed two things for the re-run:
1. Increased the number of partitions for the fact table from 60 to 768 (affects all queries).
2. Changed the 'or' predicate in query3.sql into an 'in' predicate, so the predicate can be pushed down to Kudu (only affects query 3; see the sketch below).
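The rewrite was along these lines (column and literal values are just for illustration, not the exact query3.sql text):

-- Before: an OR between equality predicates is evaluated by Impala after the rows
-- come back from the Kudu scan.
SELECT COUNT(*)
FROM store_sales
WHERE ss_item_sk = 1001 OR ss_item_sk = 1002 OR ss_item_sk = 1003;

-- After: the equivalent IN list can be pushed down to Kudu as a scan predicate,
-- so far fewer rows are returned to Impala.
SELECT COUNT(*)
FROM store_sales
WHERE ss_item_sk IN (1001, 1002, 1003);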
Below is the re-run result:
(Column 'kudu60' is the previous result, where the fact table has 60 partitions.)
(Column 'kudu768' is the new result, where the fact table has 768 partitions.)