01-19-2018 08:00 AM
Can somebody give a hint or guideline on how to maximize Kudu scan (read from a Kudu table) performance from Spark? I tried a simple DataFrame read. I also tried creating multiple DataFrames, each with a different filter on one of the primary key columns, then unioning them and writing the result to HDFS. But it appears that each tablet server serves the data through a single scanner: with 5 tablet servers I get 5 scanners and 5 tasks in 5 executors.
Is it possible to trigger more scanners via Spark?
01-23-2018 11:41 AM
The kudu-spark integration creates one task per Kudu tablet, each with a single scanner, so the scan parallelism you observed is expected. If you want more parallelism, add more tablets (partitions) to the Kudu table.
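For reference, a kudu-spark read looks roughly like the sketch below (assuming Spark 2.x with the kudu-spark connector on the classpath; the master address, table name, and output path are placeholders, and `spark` is an existing SparkSession):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kudu-scan").getOrCreate()

// Placeholder Kudu master address and table name.
val kuduMasters = "kudu-master-1:7051"

val df = spark.read
  .options(Map(
    "kudu.master" -> kuduMasters,
    "kudu.table"  -> "impala::default.my_table"))
  .format("org.apache.kudu.spark.kudu")
  .load()

// One Spark task is created per Kudu tablet, so scan parallelism is bounded
// by the table's partition count, not by how many filtered DataFrames you
// union together.
df.write.parquet("hdfs:///tmp/my_table_export")
```

To raise the tablet count, the table has to be created (or recreated) with more partitions, for example with Impala DDL using `PARTITION BY HASH (...) PARTITIONS N` on the primary key columns; an existing table's hash partitioning cannot be changed in place.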