Idle Spark Shells
Labels: Apache Spark, Apache YARN
Created on 05-16-2016 01:26 PM - edited 09-16-2022 03:19 AM
We have some users who start Spark shells and leave them open indefinitely. Without using dynamic resource allocation to deallocate executors, would it be possible to write something that polls YARN to determine whether a Spark shell isn't doing anything and, after X period of inactivity, kills it?
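As a rough sketch of what such a poller could look like: the snippet below uses the YARN ResourceManager REST API to list running Spark applications and each application's Spark UI REST API (available since Spark 1.4) to judge when it last ran a job. The ResourceManager address, the idle threshold, and the assumption that an application's tracking URL proxies through to the Spark UI's /api/v1 endpoints are all illustrative, not something confirmed in this thread.

```python
# Hypothetical "idle shell reaper": poll YARN for running Spark applications,
# ask each application's Spark UI (reached through its tracking URL) when its
# most recent job was submitted or finished, and kill anything quiet for longer
# than IDLE_THRESHOLD_MS. The ResourceManager address and threshold are made-up.
import calendar
import subprocess
import time

import requests

RM = "http://resourcemanager.example.com:8088"   # assumed ResourceManager address
IDLE_THRESHOLD_MS = 2 * 60 * 60 * 1000           # e.g. treat 2 hours without jobs as idle


def last_activity_ms(tracking_url):
    """Newest job submission/completion time reported by the app's Spark UI, in epoch ms."""
    base = tracking_url.rstrip("/")
    newest = 0
    for app in requests.get(base + "/api/v1/applications", timeout=10).json():
        jobs = requests.get(base + "/api/v1/applications/%s/jobs" % app["id"], timeout=10).json()
        for job in jobs:
            for key in ("completionTime", "submissionTime"):
                if key in job:
                    # Spark reports times like "2016-05-16T20:26:00.000GMT"
                    ts = time.strptime(job[key][:19], "%Y-%m-%dT%H:%M:%S")
                    newest = max(newest, calendar.timegm(ts) * 1000)
    return newest


resp = requests.get(
    RM + "/ws/v1/cluster/apps",
    params={"states": "RUNNING", "applicationTypes": "SPARK"},
    timeout=10,
).json()

now_ms = int(time.time() * 1000)
for app in (resp.get("apps") or {}).get("app", []):
    try:
        last = last_activity_ms(app["trackingUrl"]) or app["startedTime"]
    except (requests.RequestException, ValueError, KeyError):
        continue  # UI unreachable or unexpected payload; skip rather than kill blindly
    if now_ms - last > IDLE_THRESHOLD_MS:
        subprocess.call(["yarn", "application", "-kill", app["id"]])
```

Run on a schedule (e.g. from cron), this would only be as good as its heuristic: a shell that holds cached data but runs no jobs still looks idle, which is one reason the replies below steer toward dynamic allocation instead.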
Created 05-16-2016 03:11 PM
Heh, that is a large part of what dynamic allocation was meant for: a long-running process that only consumes resources when it's active. A shell sitting open is a prime example of that.
To some degree you can manage this via resource pools in YARN and restrict a user, group, or type of usage to a certain set of resources. That would be a pretty crude limit, though, just a cap on the problem; open shells would still hold on to resources.
Timing out shells is tricky because you lose work and state; that's probably pretty surprising to users.
Really, you want dynamic allocation for this.
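For reference, these are the kinds of properties dynamic allocation involves (the external shuffle service is required so executors can be released safely). The sketch below sets them programmatically from PySpark purely as an illustration; the same keys can live in spark-defaults.conf, and the executor counts and timeout values are arbitrary examples, not recommendations from this thread.

```python
# Illustrative only: the spark.dynamicAllocation.* keys from the Spark
# configuration docs, set through SparkConf. The values are arbitrary examples.
from pyspark import SparkConf, SparkContext

conf = (
    SparkConf()
    .setAppName("dynamic-allocation-example")
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")   # external shuffle service must be running on the NodeManagers
    .set("spark.dynamicAllocation.minExecutors", "0")
    .set("spark.dynamicAllocation.maxExecutors", "10")
    .set("spark.dynamicAllocation.executorIdleTimeout", "60s")        # release executors with no tasks
    .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "1h")   # also release executors holding cached data
)
sc = SparkContext(conf=conf)
```

Note that executors holding cached data are not released unless cachedExecutorIdleTimeout is set, so an idle shell with cached RDDs can still retain resources under the default settings.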
Created 05-17-2016 12:59 PM
Thanks for the reply. Being new to CDH, I do have a question. In the general settings of Spark in CM I see there's an option to either turn dynamic allocation off or on. If I wanted to tweak some of the configs for dynamic allocation as listed on the project page (http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation) - would I do so via the Advanced Configuration snippets?
Created 05-17-2016 01:08 PM
You can set these properties when you launch spark-shell. If a setting is really something to establish as a default, you can update or point to a new, different spark-defaults.conf for your jobs. Advanced config snippets are for services, like the Spark history server, at least to my understanding; I'm not sure that would apply here.
Created 05-17-2016 01:34 PM
If the problem is users leaving their shells open, I don't think I can trust them to add extra parameters to their CLI arguments to ensure they don't eat up extra resources (from their point of view, why would they care if they're using up my resources?).
How about changing the spark-defaults.conf for the Gateway Default Group in CM? Would that accomplish what I'm looking for?
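If the defaults are changed that way, one simple sanity check is to read back the configuration a new shell actually resolved. A minimal sketch, assuming a pyspark shell where the standard SparkContext is available:

```python
# Minimal check from a pyspark shell: print the dynamic-allocation related
# settings the shell resolved, i.e. spark-defaults.conf deployed to the gateway
# host plus any per-session overrides. In the shell `sc` already exists;
# getOrCreate() just makes the snippet runnable as a standalone script too.
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

for key, value in sorted(sc.getConf().getAll()):
    if key.startswith("spark.dynamicAllocation.") or key == "spark.shuffle.service.enabled":
        print(key, "=", value)
```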
Created 05-17-2016 01:41 PM
Others here may have better suggestions.
Created 05-18-2016 01:09 PM
FWIW, there is a safety valve setting in CM for spark-defaults.conf
Created 04-24-2018 11:53 AM
Can you expand on this? I'm pretty new to Spark, and this is marked as the solution.
Also, since dynamic allocation can handle this, why would a user not want to enable that instead?
