07-12-2017 02:08 AM
You can look into turning on the `spark.dynamicAllocation.enabled` setting. With dynamic allocation, Spark releases any unused executors back to the cluster and requests them again when they are needed. See https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation for details. Alternatively, after you have completed your analysis, you can restart the Spark interpreter in Zeppelin; due to lazy evaluation, Zeppelin only starts the Spark context when you actually need it.
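For reference, here is a minimal sketch of how those properties might look in `spark-defaults.conf` (or in the Zeppelin Spark interpreter settings); the executor bounds and idle timeout below are illustrative assumptions, not recommended values, so tune them for your cluster:

```
# Enable dynamic allocation so idle executors are returned to the cluster
spark.dynamicAllocation.enabled              true

# Required alongside dynamic allocation on YARN: shuffle files must
# outlive the executors that wrote them
spark.shuffle.service.enabled                true

# Illustrative bounds on the executor pool (assumed values)
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         10

# Release an executor after it has been idle this long (assumed value)
spark.dynamicAllocation.executorIdleTimeout  60s
```

Note that `spark.shuffle.service.enabled` matters because dynamically removed executors would otherwise take their shuffle output with them; the external shuffle service keeps that data available to the remaining executors.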