
kudu is slower than parquet?

Contributor

While doing TPC-DS testing on Impala+Kudu vs Impala+Parquet (using https://github.com/cloudera/impala-tpcds-kit), we found that for most of the queries Impala+Parquet is 2x to 10x faster than Impala+Kudu.
Has anybody done the same testing?


ps: We are running Kudu 1.3.0 with CDH 5.10.

13 REPLIES

Super Collaborator
How much RAM did you give to Kudu? The default is 1 GB, which starves it.

Champion
Please share the HW and SW specs and the results. I am quite interested. As pointed out, both could sway the results as even Impala's defaults are anemic.

Also, I want to point out that Kudu is a filesystem, Impala is an in-memory query engine. Parquet is a file format.

So what you are really comparing is Impala+Kudu vs Impala+HDFS. You should be using the same file format for both to make it a direct comparison. Also, I don't view Kudu as the inherently faster option. Yes, it is written in C++, which can be faster than Java, and it is, I believe, less of an abstraction. Anyway, my point is that Kudu is great for some things and HDFS is great for others. It isn't a this-or-that based on performance, at least in my opinion.


We'd expect Kudu to be slower than Parquet on a pure read benchmark, but not 10x slower - that may be a configuration problem. We've published results on the Cloudera blog before that demonstrate this: http://blog.cloudera.com/blog/2017/02/performance-comparing-of-different-file-formats-and-storage-en...

 

Parquet is a read-only storage format while Kudu supports row-level updates so they make different trade-offs. I think we have headroom to significantly improve the performance of both table formats in Impala over time.

E.g. in Impala 2.9/CDH5.12 IMPALA-5347 and IMPALA-5304 improve pure Parquet scan performance by 50%+ on some workloads, and I think there are probably similar opportunities for Kudu.

 

Contributor

@mbigelow, you've brought up a good point that HDFS is going to be strong for some workloads, while Kudu will be better for others. It's not quite right to characterize Kudu as a file system, however. Kudu is a distributed, columnar storage engine. In other words, Kudu provides storage for tables, not files. So in this case it is fair to compare Impala+Kudu to Impala+HDFS+Parquet.
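
For concreteness, here is a minimal Impala DDL sketch of the two setups being compared (table and column names are purely illustrative, not from the poster's benchmark): a table whose storage is managed by Kudu versus a table stored as Parquet files on HDFS.

```sql
-- Illustrative only: a Kudu-backed table. Kudu manages the storage,
-- so a primary key and a partitioning scheme are required.
CREATE TABLE metrics_kudu (
  id  BIGINT,
  val STRING,
  PRIMARY KEY (id)
)
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU;

-- ...versus an HDFS-backed table using the Parquet file format.
CREATE TABLE metrics_parquet (
  id  BIGINT,
  val STRING
)
STORED AS PARQUET;
```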

Contributor

Thanks all for your replies, here are some details about the testing.

We are running impalad + kudu on 14 nodes.

Node info:

CPU model: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

CPU cores: 32

Memory: 128 GB

Disk: 12 x 4 TB SAS

 

impalad and kudu are installed on each node, with 16 GB of memory for kudu and 96 GB for impalad.

The Parquet files are stored on another Hadoop cluster with 80+ nodes (running HDFS + YARN).

 

We are running the TPC-DS queries (https://github.com/cloudera/impala-tpcds-kit).

With the 18 queries, each query was run 3 times (3 times on Impala+Kudu and 3 times on Impala+Parquet), and then we calculated the average time. Comparing the average query times, we found that Kudu is slower than Parquet. Here is the result for the 18 queries:

(attached results chart: kudu-parquet.png)

 

We are planning to set up an OLAP system, so we are comparing Impala+Kudu vs Impala+Parquet to see which is the better choice.

Super Collaborator

Make sure you run COMPUTE STATS after loading the data so that Impala knows how to join the Kudu tables.
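
For example (the table names below are just illustrative TPC-DS names), something along these lines after each load:

```sql
-- Collect table and column statistics so the planner can order joins sensibly.
COMPUTE STATS store_sales;
COMPUTE STATS date_dim;

-- Sanity check that the stats are actually populated.
SHOW TABLE STATS store_sales;
SHOW COLUMN STATS store_sales;
```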

 

What is the total size of your data set?

 

I am surprised at the difference in your numbers and I think they should be closer if tuned correctly. Regardless, if you don't need to be able to do online inserts and updates, then Kudu won't buy you much over the raw scan speed of an immutable on-disk format like Impala + Parquet on HDFS.
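
To illustrate that trade-off, a Kudu-backed table accepts statements like the following through Impala, which an immutable Parquet table on HDFS cannot (table and column names below are just the usual TPC-DS ones, used as an example):

```sql
-- Row-level mutations are possible because Kudu is the storage engine;
-- with Parquet on HDFS you would have to rewrite whole files/partitions instead.
UPDATE store_sales SET ss_quantity = 0 WHERE ss_item_sk = 12345;
DELETE FROM store_sales WHERE ss_sold_date_sk < 2450815;
```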

 

 

Super Collaborator

Can you also share how you partitioned your Kudu table?

Contributor

1. Make sure you run COMPUTE STATS: yes, we do this after loading the data.

 

2. What is the total size of your data set?

The Impala TPC-DS tool creates 9 dim tables and 1 fact table.

The dim tables are small (record counts from 1k to 4 million+, depending on the data size generated),

and the fact table is big. Here is the 'data size --> record count' of the fact table:

512 GB <--> 4,224,587,147

256 GB <--> 2,112,281,549

64 GB <--> 528,071,062

 

3. Can you also share how you partitioned your Kudu table?

For the dim tables, we hash partition them into 2 partitions by their primary key (no partitioning for the Parquet tables).

For the fact table, we range partition it into 60 partitions by its date field (the Parquet table is partitioned into 1800+ partitions). A DDL sketch along these lines is shown below.

For the tables created in Kudu, the replication factor is 3.
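
A rough sketch of what that layout might look like in Impala DDL (column names, split points, and the number of ranges are assumptions for illustration, not the actual schema used):

```sql
-- Dim table: hash partitioned into 2 partitions on its primary key.
CREATE TABLE date_dim_kudu (
  d_date_sk BIGINT,
  d_date    STRING,
  PRIMARY KEY (d_date_sk)
)
PARTITION BY HASH (d_date_sk) PARTITIONS 2
STORED AS KUDU;

-- Fact table: range partitioned on the date surrogate key. Only 3 ranges
-- are shown here; the real table described above used ~60.
CREATE TABLE store_sales_kudu (
  ss_sold_date_sk BIGINT,
  ss_item_sk      BIGINT,
  ss_quantity     INT,
  PRIMARY KEY (ss_sold_date_sk, ss_item_sk)
)
PARTITION BY RANGE (ss_sold_date_sk) (
  PARTITION VALUES < 2451000,
  PARTITION 2451000 <= VALUES < 2452000,
  PARTITION 2452000 <= VALUES
)
STORED AS KUDU;
```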

Super Collaborator
Could you check whether you are under the current scale recommendations for Kudu?

We are working hard on increasing these limits and will try to do so for each coming release.

Current scale limits for CDH 5.11 (Kudu 1.3):
https://www.cloudera.com/documentation/kudu/latest/topics/kudu_known_issues.html#concept_cws_n4n_5z