I would suggest benchmarking this yourself, since many variables affect the outcome, including any resource pools that may be set up.
You may see some improvement from running multiple queries within the same Spark context, since you avoid the overhead of starting the driver and separate executor nodes for each query. Part of Spark's performance comes from reusing JVMs rather than spinning up new ones. Make sure the same resources are available for each test, though. That said, the startup overhead becomes less significant as the processing time of your tasks increases.
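The amortization point can be sketched with simple arithmetic. The numbers below are purely illustrative assumptions, not measurements from any cluster:

```python
# Illustrative model: a fixed startup cost (driver + executor JVM spin-up)
# is paid per query when each query gets its own Spark context.
# All numbers here are hypothetical, not benchmarks.

def overhead_fraction(startup_s: float, task_s: float) -> float:
    """Fraction of a query's wall time spent on context startup
    when the query pays its own startup cost."""
    return startup_s / (startup_s + task_s)

# Assume ~30 s to bring up a context (hypothetical figure).
startup = 30.0
for task in (10.0, 60.0, 600.0):
    frac = overhead_fraction(startup, task)
    print(f"task={task:>5.0f}s  startup overhead={frac:.0%}")
```

For a 10-second query the assumed startup dominates, while for a 10-minute query it is nearly negligible, which is why reusing one context matters most for many short queries.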