Expert Contributor
Posts: 193
Registered: ‎07-01-2015

Maximize Kudu scan throughput via Spark

Hi,

Can somebody give a hint or a guideline on how to maximize Kudu scan (read from a Kudu table) performance from Spark? I tried a simple DataFrame read. I also tried creating multiple DataFrames, each with a different filter on one of the primary key columns, then unioning the DataFrames and writing to HDFS, but it seems the tablet servers are handing out the data via one scanner each: there are 5 tablet servers, 5 scanners, and 5 tasks in 5 executors.

Is it possible to trigger more scanners via Spark?
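
For reference, this is roughly what I am running (a sketch from spark-shell; the table name, master addresses, and the numeric key column "id" are placeholders):

```scala
// Sketch of the reads described above (Scala, kudu-spark, run in spark-shell
// where `spark` is the SparkSession). Names are placeholders.
import org.apache.kudu.spark.kudu._

val df = spark.read
  .options(Map(
    "kudu.master" -> "master1:7051,master2:7051,master3:7051",
    "kudu.table"  -> "impala::default.my_table"))
  .kudu

// The union variant: one filtered DataFrame per slice of the (numeric)
// primary key column, unioned and written to HDFS.
val parts  = (0 until 5).map(i => df.filter(df("id") % 5 === i))
val merged = parts.reduce(_ union _)
merged.write.parquet("hdfs:///tmp/kudu_export")
```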

 

Thanks

 

Cloudera Employee
Posts: 19
Registered: ‎09-28-2015

Re: Maximize Kudu scan throughput via Spark

Hi Tomas,

 

The kudu-spark integration creates one task/executor per Kudu tablet, each with a single scanner. If you want to achieve more parallelism, you can add more tablets/partitions to the Kudu table.
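
For example, something along these lines (just a sketch; the table name, column names, and bucket count are purely illustrative) creates a table with 24 hash buckets, which means 24 tablets and therefore 24 scan tasks:

```scala
import scala.collection.JavaConverters._
import org.apache.kudu.client.CreateTableOptions
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.types._

val kuduContext = new KuduContext("master1:7051", spark.sparkContext)

// Illustrative schema; "id" is the primary key column and must be
// non-nullable.
val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("value", StringType, nullable = true)))

// 24 hash buckets => 24 tablets => 24 Spark scan tasks, instead of 5.
val options = new CreateTableOptions()
  .addHashPartitions(List("id").asJava, 24)
  .setNumReplicas(3)

kuduContext.createTable("my_table_24buckets", schema, Seq("id"), options)
```

Note that Kudu cannot change the hash partitioning of an existing table, so the data would have to be reloaded into the new table.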

New Contributor
Posts: 1
Registered: ‎06-19-2018

Re: Maximize Kudu scan throughput via Spark

Hi,

 

I'm trying to access Kudu through Impala and through Spark, and a scan through Impala seems to be 5-6 times faster than through Spark: it takes 2.5 minutes to scan the Kudu table through Impala, whereas it takes 18 minutes through Spark.

I would like to understand the reason for this.

Expert Contributor
Posts: 193
Registered: ‎07-01-2015

Re: Maximize Kudu scan throughput via Spark

You did not mention the CDH version, but I think the problem is that Spark launches many executors to read, and those executors are not co-located with the Kudu tablet servers.

I don't know whether you are just reading/filtering the data or reading and writing it into Parquet - it depends on how the Spark job is executed.

What I also noticed is that running multiple Spark jobs against the same table (with different partitions) did not help either.
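
Depending on the kudu-spark version, there is also a kudu.scanLocality read option that may be worth checking (I have not verified it on your setup, so treat this as an assumption); reading from the closest replica only pays off when the executors actually run on the tablet server hosts:

```scala
// Sketch only; assumes a kudu-spark build that supports kudu.scanLocality.
// Table name and master address are placeholders.
import org.apache.kudu.spark.kudu._

val df = spark.read
  .options(Map(
    "kudu.master"       -> "master1:7051",
    "kudu.table"        -> "impala::default.my_table",
    // closest_replica reads from a replica on the executor's host when one
    // exists; leader_only forces every scan to go to the tablet leader.
    "kudu.scanLocality" -> "closest_replica"))
  .kudu
```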
