
Idle Spark Shells

SOLVED

Idle Spark Shells

Explorer

We have some users who start Spark shells and leave them open indefinitely. Without using dynamic resource allocation to deallocate executors, would it be possible to write something that polls YARN to determine whether a Spark shell is doing anything and, after X period of inactivity, kills it?
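Something along these lines is what I had in mind. This is completely untested, just a sketch; the ResourceManager address, the "Spark shell" name match, the idle threshold, and the idea of using each shell's Spark UI (via the RM proxy) to check for running jobs are all assumptions, and it needs the yarn CLI, curl, and jq available.

  #!/bin/bash
  # Sketch: kill Spark shells that report no running jobs for several checks in a row.
  RM="http://resourcemanager.example.com:8088"   # ResourceManager web UI (assumed address)
  IDLE_CHECKS_BEFORE_KILL=6                      # e.g. 6 checks on a 10-minute cron = ~1 hour idle
  STATE_DIR=/var/tmp/idle-spark-shells
  mkdir -p "$STATE_DIR"

  # spark-shell registers in YARN with the application name "Spark shell" by default.
  for app in $(yarn application -list -appStates RUNNING 2>/dev/null \
               | grep -i 'spark shell' | awk '{print $1}'); do
    # Ask the shell's Spark UI, through the RM proxy, how many jobs are running right now.
    # On YARN the Spark application id matches the YARN application id.
    running=$(curl -sL "$RM/proxy/$app/api/v1/applications/$app/jobs?status=running" 2>/dev/null \
              | jq 'length' 2>/dev/null)
    if [ "$running" = "0" ]; then
      count=$(( $(cat "$STATE_DIR/$app" 2>/dev/null || echo 0) + 1 ))
      echo "$count" > "$STATE_DIR/$app"
      if [ "$count" -ge "$IDLE_CHECKS_BEFORE_KILL" ]; then
        echo "Killing idle Spark shell $app"
        yarn application -kill "$app"
        rm -f "$STATE_DIR/$app"
      fi
    else
      # Either the shell is busy or the check failed; reset the idle counter and do nothing.
      rm -f "$STATE_DIR/$app"
    fi
  done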

1 ACCEPTED SOLUTION


Re: Idle Spark Shells

Cloudera Employee

FWIW, there is a safety valve setting in CM for spark-defaults.conf

7 REPLIES

Re: Idle Spark Shells

Master Collaborator

Heh, that is a large part of what dynamic allocation was meant for: a long-running process that only consumes resources while it's active, and a shell sitting open is a prime example of that.

 

To some degree you can manage this via resource pools in YARN, restricting a user, group, or type of usage to a certain set of resources. That would be a pretty crude limit, though, just a cap on the problem; open shells would still hold whatever resources they've been given.
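For example, a Fair Scheduler pool along the lines below would cap what everything submitted to it can hold. In CDH you would normally define this through Cloudera Manager's Dynamic Resource Pools page rather than hand-editing fair-scheduler.xml, and the pool name, limits, and ACL here are just illustrative:

  <allocations>
    <queue name="adhoc_shells">
      <!-- Hard cap on what all apps in this pool can hold, combined -->
      <maxResources>40960 mb, 16 vcores</maxResources>
      <maxRunningApps>5</maxRunningApps>
      <!-- ACL format is "user1,user2 group1,group2" -->
      <aclSubmitApps>alice,bob analysts</aclSubmitApps>
      <weight>1.0</weight>
    </queue>
  </allocations>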

 

Timing out shells is tricky because you lose work and state; that's probably pretty surprising for the user.

Really you want dynamic allocation for this.
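Roughly, the settings involved look like this (the property names are from the Spark configuration docs, the values are just examples, and on YARN you also need the external shuffle service running on the NodeManagers). Note that only idle executors get released; the shell's driver, and so the user's session state, stays up:

  spark.dynamicAllocation.enabled          true
  spark.shuffle.service.enabled            true
  # Give back executors the shell has not used for 60 seconds
  spark.dynamicAllocation.executorIdleTimeout       60s
  # Executors holding cached data are kept longer (by default they are kept indefinitely)
  spark.dynamicAllocation.cachedExecutorIdleTimeout 1h
  spark.dynamicAllocation.minExecutors     0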

Re: Idle Spark Shells

Explorer

Thanks for the reply. Being new to CDH, I do have a question. In the general settings for Spark in CM, I see there's an option to turn dynamic allocation on or off. If I wanted to tweak some of the dynamic allocation configs listed on the project page (http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation), would I do so via the Advanced Configuration Snippets?


Re: Idle Spark Shells

Master Collaborator
Typically you set this per job on the command line as args to spark-shell. If a setting is really something to establish as a default, you can update or point to a new, different spark-defaults.conf for your jobs. Advanced config snippets are for services, like the Spark history server, at least to my understanding. I'm not sure that would apply.
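For example, something like this per invocation (the timeout value is arbitrary, and it assumes the YARN shuffle service is already set up on the NodeManagers):

  spark-shell \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.executorIdleTimeout=120s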

Re: Idle Spark Shells

Explorer

If the problem is users leaving their shells open, I don't think I can trust them to add extra parameters to their CLI arguments to ensure they don't eat up extra resources (from their point of view, why would they care if they're using up my resources?).

 

How about changing the spark-defaults.conf for the Gateway Default Group in CM? Would that accomplish what I'm looking for?

Re: Idle Spark Shells

Master Collaborator
Yes, that sounds right, though I confess I haven't tried that myself. Others here may have better suggestions.

Re: Idle Spark Shells

Cloudera Employee

FWIW, there is a safety valve setting in CM for spark-defaults.conf
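To expand a bit: it's the Gateway-scoped snippet for the Spark service, something like "Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf" (the exact label can vary by CM version). Whatever you paste there ends up in spark-defaults.conf on every gateway host once you redeploy client configuration, so every spark-shell picks it up without users passing any flags. For example (values illustrative, and the shuffle service still has to be enabled on YARN):

  spark.dynamicAllocation.enabled              true
  spark.shuffle.service.enabled                true
  spark.dynamicAllocation.executorIdleTimeout  60s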

Re: Idle Spark Shells

New Contributor

Can you expand on this? I'm pretty new to Spark and this is marked as the solution.

Also, since dynamicAllocation can handle this, why would a user not want to enable that instead?