
Maximize Kudu scan throughput via Spark


Hi,

Can somebody give a hint or guideline on how to maximize Kudu scan (read from a Kudu table) performance from Spark? I tried a simple DataFrame read, and I also tried creating multiple DataFrames, each with a different filter on one of the primary key columns, then unioning the DataFrames and writing to HDFS. It seems, however, that each tablet server hands out the data via a single scanner, so with 5 tablet servers I get 5 scanners and 5 tasks in 5 executors.
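A rough sketch of what I tried (the master addresses, table name, key column, and output path are placeholders):

import org.apache.spark.sql.SparkSession

// Placeholder master addresses and table name.
val spark = SparkSession.builder().appName("kudu-scan").getOrCreate()

val kuduDF = spark.read
  .format("org.apache.kudu.spark.kudu")
  .option("kudu.master", "master1:7051,master2:7051,master3:7051")
  .option("kudu.table", "impala::default.my_table")
  .load()

// Each filter is pushed down to Kudu as a predicate on a primary key column.
val part1 = kuduDF.filter("id < 500000000")
val part2 = kuduDF.filter("id >= 500000000")

// Union the partial reads and write the result to HDFS.
part1.union(part2).write.parquet("hdfs:///tmp/kudu_export")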

 

Is it possible to trigger more scanners via Spark?

 

Thanks

 

1 ACCEPTED SOLUTION

Contributor

Hi Tomas,

 

The kudu-spark integration creates one task/executor per Kudu tablet, each with a single scanner. If you want more parallelism, add more tablets/partitions to the Kudu table.


3 REPLIES

Contributor

Hi Tomas,

 

The kudu-spark integration creates one task/executor per Kudu tablet, each with a single scanner. If you want more parallelism, add more tablets/partitions to the Kudu table.
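As a rough sketch, a table created with more hash buckets produces more tablets and therefore more scan tasks; the master address, table name, and schema below are placeholders:

import org.apache.kudu.client.CreateTableOptions
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.types._
import scala.collection.JavaConverters._

// Placeholder master address; spark is the existing SparkSession.
val kuduContext = new KuduContext("master1:7051", spark.sparkContext)

// Placeholder schema with a single primary key column.
val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("value", StringType, nullable = true)))

// 24 hash buckets -> 24 tablets, so kudu-spark can run up to 24 scan tasks in parallel.
kuduContext.createTable(
  "my_table_24_buckets",
  schema,
  Seq("id"),
  new CreateTableOptions()
    .addHashPartitions(List("id").asJava, 24)
    .setNumReplicas(3))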

New Contributor

Hi,

 

I'm trying to access Kudu through Impala and Spark, and it seems a scan through Impala is 5-6 times faster than through Spark. Through Impala it takes 2.5 minutes to scan the Kudu table, whereas it takes 18 minutes to scan the same table through Spark.

I would like to learn the reason for this.


You did not mention the CDH version, but I think the problem is that Spark launches many executors to read, and those executors are not co-located with the Kudu tablet servers.

I don't know if you are just reading/filtering the data, or reading and writing to Parquet; it depends on how the Spark job is executed.

I also noticed that running multiple Spark jobs against the same table (with different partitions) did not help either.
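For completeness, a sketch of the scan-related kudu-spark read options (the master addresses and table name are placeholders, and whether each option is available depends on the kudu-spark/CDH version):

// Placeholder master addresses and table name.
val df = spark.read
  .format("org.apache.kudu.spark.kudu")
  .option("kudu.master", "master1:7051,master2:7051,master3:7051")
  .option("kudu.table", "impala::default.my_table")
  // Read from the closest replica so co-located executors can scan locally.
  .option("kudu.scanLocality", "closest_replica")
  // Larger scanner batches mean fewer round trips to the tablet servers.
  .option("kudu.batchSize", (8 * 1024 * 1024).toString)
  // Newer kudu-spark releases can split a tablet into several scan tokens
  // (more than one task per tablet):
  // .option("kudu.splitSizeBytes", (128L * 1024 * 1024).toString)
  .load()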