Created on 05-16-2016 01:26 PM - edited 09-16-2022 03:19 AM
We have some users who start Spark shells and leave them open indefinitely. Without using dynamic resource allocation to deallocate executors - would it be possible to write something to poll YARN to determine if a Spark shell isn't doing anything, and after X time period of inactivity, kill it?
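Something along these lines is roughly what I have in mind - just an untested sketch, and the ResourceManager address, the idle threshold, the poll interval, and the idea of checking for running jobs through the Spark REST API behind the YARN web proxy are all assumptions on my part:

#!/usr/bin/env python
# Sketch of an "idle Spark shell reaper". The RM address, thresholds and
# poll interval below are placeholder assumptions, not real values.
import subprocess
import time

import requests

RM = "http://resourcemanager.example.com:8088"   # YARN ResourceManager web UI (assumed host/port)
IDLE_LIMIT = 4 * 60 * 60                         # seconds without a running job before killing
POLL_EVERY = 300                                 # polling interval in seconds

last_active = {}                                 # application id -> last time a job was seen running

def running_spark_apps():
    # RM REST API: list RUNNING applications of type SPARK
    resp = requests.get(RM + "/ws/v1/cluster/apps",
                        params={"states": "RUNNING", "applicationTypes": "SPARK"})
    return (resp.json().get("apps") or {}).get("app", [])

def has_running_job(app_id):
    # Spark's monitoring REST API, reached through the YARN web proxy,
    # lists the jobs of a live application.
    url = "{0}/proxy/{1}/api/v1/applications/{1}/jobs?status=running".format(RM, app_id)
    try:
        return len(requests.get(url, timeout=10).json()) > 0
    except Exception:
        return True   # if we can't tell, err on the side of not killing

while True:
    now = time.time()
    for app in running_spark_apps():
        app_id = app["id"]
        if has_running_job(app_id):
            last_active[app_id] = now
        elif now - last_active.setdefault(app_id, now) > IDLE_LIMIT:
            subprocess.call(["yarn", "application", "-kill", app_id])
            last_active.pop(app_id, None)
    time.sleep(POLL_EVERY)

Is something like that reasonable, or is there a better-supported way to do it?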
Created 05-16-2016 03:11 PM
Heh, that is a large part of what dynamic allocation was meant for: it lets a long-running process consume resources only when it's active, and a shell sitting open is a prime example of that.
To some degree you can manage this via resource pools in YARN, and restrict a user, group or perhaps type of usage to a certain set of resources. This would be a pretty crude limit though, just a cap on the problem. Open shells would still keep resources.
Timing out shells is tricky because you lose work and state; that would probably be pretty surprising to the user.
Really you want dynamic allocation for this.
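For a single shell that's roughly the following (just an illustration - the timeout value is arbitrary, and the external shuffle service has to be running on the NodeManagers for dynamic allocation to work):

spark-shell \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=0 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s

With that, executors that sit idle get released back to YARN while the shell itself stays open.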
Created 05-17-2016 12:59 PM
Thanks for the reply. Being new to CDH, I do have a question. In the general settings of Spark in CM I see there's an option to turn dynamic allocation on or off. If I wanted to tweak some of the dynamic allocation configs listed on the project page (http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation) - would I do so via the Advanced Configuration snippets?
Created 05-17-2016 01:08 PM
Created 05-17-2016 01:34 PM
If the problem is users leaving their shells open, I don't think I can trust them to add extra parameters to their CLI arguments to ensure they don't eat up extra resources (from their point of view, why would they care if they're using up my resources?).
How about changing the spark-defaults.conf for the Gateway Default Group in CM? Would that accomplish what I'm looking for?
Created 05-17-2016 01:41 PM
Created 05-18-2016 01:09 PM
FWIW, there is a safety valve setting in CM for spark-defaults.conf
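Expanding on that a bit: rather than trusting users to pass flags, you can put the dynamic allocation properties into the Spark client configuration snippet (safety valve) for spark-defaults.conf on the gateway role (the exact field name varies a little by CM version). Something like the following - the timeout values are only examples:

spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
spark.dynamicAllocation.minExecutors 0
spark.dynamicAllocation.executorIdleTimeout 60s
spark.dynamicAllocation.cachedExecutorIdleTimeout 1h

After you deploy the client configuration, every spark-shell launched from a gateway host picks these up by default, so an idle shell hands its executors back to YARN instead of holding them.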
Created 04-24-2018 11:53 AM
Can you expand on this? I'm pretty new to Spark, and this is marked as the solution.
Also, since dynamicAllocation can handle this, why would a user not want to enable that instead?