Created on 06-08-2018 09:25 AM - edited 08-17-2019 08:02 PM
Hi all,
I'm performing a write operation to a Postgres database in Spark. The DataFrame has 44k rows and is in 4 partitions, but the Spark job takes 20+ minutes to complete. Looking at the logs (attached), I see the map stage is the bottleneck, where more than 600 tasks are created. Does anyone have any insight into why this might be and how I could resolve the performance issue? I've also included a screenshot of my cluster metrics.
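For context, a write like this usually goes through Spark's JDBC data source. A minimal sketch of what such a write might look like; the host, database, table name, credentials, and batch size below are placeholders, not details from this post:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pg-write").getOrCreate()
df = spark.table("db.my_hive_table")  # hypothetical source table

(df.write
   .format("jdbc")
   .option("url", "jdbc:postgresql://pg-host:5432/mydb")  # placeholder host/db
   .option("dbtable", "public.target_table")              # placeholder table
   .option("user", "spark_user")
   .option("password", "secret")
   .option("batchsize", 10000)  # rows per JDBC batch; Spark's default is 1000
   .mode("append")
   .save())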
Thanks,
Mike
Created 06-08-2018 03:59 PM
I think I've found the issue: the Hive table I query to create the DataFrame has the same number of underlying HDFS blocks, and Spark creates one map task per block. Merging these together has improved performance, although the job still takes 5 minutes to complete.
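For anyone hitting the same thing, the fix amounts to reducing the number of input partitions before the write. A minimal sketch, assuming the same placeholder table and connection details as the earlier write sketch and an illustrative partition count:

df = spark.table("db.my_hive_table")  # hypothetical Hive table name

# The scan produces one task per underlying HDFS block, so coalesce the many
# small splits into a few partitions before the JDBC write.
df_small = df.coalesce(4)

jdbc_options = {  # same placeholder connection details as above
    "url": "jdbc:postgresql://pg-host:5432/mydb",
    "dbtable": "public.target_table",
    "user": "spark_user",
    "password": "secret",
}
df_small.write.format("jdbc").options(**jdbc_options).mode("append").save()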
Created 06-10-2018 03:33 AM
Try a map over partitions and have each partition write several hundred or a few thousand rows to Postgres.
Created 06-10-2018 04:32 PM
Are you referring to mapPartitions() (http://apachesparkbook.blogspot.com/2015/11/mappartition-example.html)? Could you provide an example of how I might apply this?
Thanks,
Mike
Created 06-11-2018 05:38 AM
Yes, that is it. Basically, inside the iterator you would build one large multi-row INSERT statement.
INSERT INTO films (code, title, did, date_prod, kind) VALUES ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'), ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
Your column names can come from the DataFrame and the values come from the DataFrame itself, so nothing is hard-coded and you can reuse this code for virtually any database that accepts ANSI SQL inserts.
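A minimal sketch of that approach using foreachPartition (the write-side counterpart of mapPartitions) together with psycopg2's execute_values to build multi-row inserts. The driver choice, connection details, and the films table columns from the example above are illustrative assumptions, not details from this thread:

import psycopg2
from psycopg2.extras import execute_values

def write_partition(rows):
    # One connection per Spark partition; connection details are placeholders.
    conn = psycopg2.connect(host="pg-host", dbname="mydb",
                            user="spark_user", password="secret")
    with conn, conn.cursor() as cur:
        # Pack many rows into each INSERT, as in the films example above.
        execute_values(
            cur,
            "INSERT INTO films (code, title, did, date_prod, kind) VALUES %s",
            [(r.code, r.title, r.did, r.date_prod, r.kind) for r in rows],
            page_size=1000,  # rows bundled into each INSERT statement
        )
    conn.close()

df.foreachPartition(write_partition)

Note that psycopg2 (or whichever driver you pick) has to be installed on every executor, since the partition function runs on the workers rather than the driver.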