I am in a somewhat unusual situation: I have both small tables and big tables to process with Spark, and it all has to happen in a single Spark job.
To meet my performance targets, I need to set the property
spark.sql.shuffle.partitions = 12 for the small tables and
spark.sql.shuffle.partitions = 500 for the big tables.
How can I change this property dynamically within a single Spark application?
Or can I have multiple configuration files and load the appropriate one from within the program?
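
For illustration, this is roughly what I am hoping is possible within one job, assuming `spark.conf.set` takes effect for queries executed after the call (`processSmallTables` and `processBigTables` are hypothetical placeholders for my own logic):

```scala
import org.apache.spark.sql.SparkSession

object DynamicShufflePartitions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamic-shuffle-partitions")
      .getOrCreate()

    // Use fewer shuffle partitions while processing the small tables
    spark.conf.set("spark.sql.shuffle.partitions", "12")
    // processSmallTables(spark)  // hypothetical helper for my small-table logic

    // Switch to more shuffle partitions before processing the big tables
    spark.conf.set("spark.sql.shuffle.partitions", "500")
    // processBigTables(spark)    // hypothetical helper for my big-table logic

    spark.stop()
  }
}
```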