Maximize Kudu scan throughput via Spark
Labels: Apache Kudu, Apache Spark
Created on 01-19-2018 08:00 AM - edited 09-16-2022 05:45 AM
Hi,
can somebody give a hint or a guideline on how to maximize Kudu scan (reading from a Kudu table) performance from Spark? I tried a simple dataframe read, and I also tried creating multiple dataframes, each with a different filter on one of the primary key columns, then unioning the dataframes and writing to HDFS. It still seems to me that each tablet server hands out the data through a single scanner, so there are 5 tablet servers, 5 scanners, and 5 tasks in 5 executors.
Is it possible to trigger more scanners via Spark?
Thanks
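For reference, here is a minimal spark-shell sketch of the filtered-union approach described above. The master address, table name, the `id` key column, and the range boundaries are all placeholders, and the kudu-spark datasource is addressed by its fully qualified name, which should work with most kudu-spark versions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kudu-filtered-union").getOrCreate()

// Hypothetical master address and table name; adjust for your cluster.
val kuduOptions = Map(
  "kudu.master" -> "kudu-master-1:7051",
  "kudu.table"  -> "impala::default.my_table")

// Base dataframe over the Kudu table; kudu-spark creates one Spark partition per tablet.
val base = spark.read.options(kuduOptions).format("org.apache.kudu.spark.kudu").load()

// Split the read into key ranges on a leading primary-key column ("id" is a placeholder)
// and union the filtered pieces before writing to HDFS.
val ranges = Seq((0L, 1000000L), (1000000L, 2000000L), (2000000L, 3000000L))
val unioned = ranges
  .map { case (lo, hi) => base.filter(s"id >= $lo AND id < $hi") }
  .reduce(_ union _)

unioned.write.mode("overwrite").parquet("hdfs:///tmp/kudu_export")
```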
Created 01-23-2018 11:41 AM
Hi Tomas,
The kudu-spark integration will create one task/executor per Kudu tablet, each with a single scanner. If you want to achieve more parallelism, you can add more tablets/partitions to the Kudu table.
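For illustration, a spark-shell sketch of checking how many scan tasks a read will produce and of creating a table with more hash buckets via KuduContext. The master address, table names, and the `id` key column are placeholders, and reusing the dataframe schema for the new table is only for brevity:

```scala
import scala.collection.JavaConverters._
import org.apache.kudu.client.CreateTableOptions
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kudu-parallelism").getOrCreate()

// Placeholder master address; adjust for your cluster.
val master = "kudu-master-1:7051"

// kudu-spark creates one Spark input partition (and one scanner) per tablet, so the
// partition count of the loaded dataframe shows the available scan parallelism.
val df = spark.read
  .options(Map("kudu.master" -> master, "kudu.table" -> "impala::default.my_table"))
  .format("org.apache.kudu.spark.kudu")
  .load()
println(s"Spark partitions (= Kudu tablets): ${df.rdd.getNumPartitions}")

// More tablets -> more scan tasks. For example, create a table with 24 hash buckets.
val kuduContext = new KuduContext(master, spark.sparkContext)
if (!kuduContext.tableExists("default.wider_table")) {
  kuduContext.createTable(
    "default.wider_table",
    df.schema,                                    // source schema reused only for illustration
    Seq("id"),                                    // hypothetical primary key column
    new CreateTableOptions()
      .addHashPartitions(List("id").asJava, 24)   // 24 tablets -> up to 24 parallel scan tasks
      .setNumReplicas(3))
}
```

Each hash bucket becomes a tablet, so with 24 buckets Spark can run up to 24 scan tasks in parallel, subject to the number of executors and cores available.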
Created 06-19-2018 12:51 AM
Hi,
I'm trying to access Kudu through Impala and through Spark, and scanning through Impala appears to be 5-6 times faster than through Spark: scanning the Kudu table takes 2.5 minutes through Impala, whereas it takes 18 minutes through Spark.
I would like to understand the reason for this.
Created 06-19-2018 02:00 AM
You did not mention the version of CDH, but I think the problem is that Spark launches many executors for the read, and those executors are not co-located with the Kudu tablet servers.
I don't know whether you are just reading/filtering the data, or reading it and writing it out as Parquet; it depends on how the Spark job is executed.
What I also noticed is that running multiple Spark jobs against the same table (with different partitions) did not help either.
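If co-location is the issue, it can help to run the executors on the tablet server hosts and to ask the scanner to read from the closest replica. The sketch below is only an example: the `kudu.scanLocality` and `kudu.batchSize` read options come from more recent kudu-spark releases and may not be available in the CDH version discussed here; the master address and table name are placeholders.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kudu-scan-tuning").getOrCreate()

// Placeholders for master and table; scanLocality/batchSize are read options
// from newer kudu-spark releases and may not exist in older versions.
val df = spark.read
  .options(Map(
    "kudu.master"       -> "kudu-master-1:7051",
    "kudu.table"        -> "impala::default.my_table",
    "kudu.scanLocality" -> "closest_replica",           // prefer replicas local to the executor
    "kudu.batchSize"    -> (8 * 1024 * 1024).toString)) // larger scan batches
  .format("org.apache.kudu.spark.kudu")
  .load()

// A simple full-scan timing, roughly comparable to a SELECT COUNT(*) in Impala.
val start = System.nanoTime()
val rows  = df.count()
println(f"rows = $rows, scan took ${(System.nanoTime() - start) / 1e9}%.1f s")
```

Whether the executors actually land on the tablet server hosts depends on the YARN/locality configuration, so it is worth checking in the Spark UI where the scan tasks ran.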
