
Is it possible to define a max memory size for the Spark executor and driver in YARN?

Expert Contributor

I am just starting to understand Spark memory management on YARN and have a few questions that I thought would be better to ask the experts here.

1. Is there a way to restrict the maximum memory size that users can use for the Spark executor and driver when submitting jobs on a YARN cluster?

2. What is the best practice for determining the number of executors required for a job? Is there a maximum number that users can be restricted to?

3. How does the RM handle resource allocation if most of the resources in a queue are consumed by Spark jobs? How is preemption handled?

1 ACCEPTED SOLUTION

Master Guru

1. Is there a way to restrict the maximum memory size that users can use for the Spark executor and driver when submitting jobs on a YARN cluster?

You can set an upper limit for all containers (yarn.scheduler.maximum-allocation-mb in yarn-site.xml); any request larger than that is rejected, which effectively caps executor and driver memory. But there is no way I am aware of to specifically restrict Spark applications, or applications in one queue.
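For reference, here is a minimal yarn-site.xml sketch of that limit; the 16 GB / 8 vcore values are placeholders, not recommendations:

```xml
<!-- yarn-site.xml: cluster-wide cap on any single container request -->
<!-- Example values only; tune them to your node sizes. -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value> <!-- no executor or driver container may request more than 16 GB -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value> <!-- cap on vcores per container -->
</property>
```

Keep in mind that the YARN request for an executor is spark.executor.memory plus the memory overhead, so the effective ceiling on executor memory sits a bit below the container limit.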

2. What is the best practice for determining the number of executors required for a job?

That's a good question. There was an interesting presentation about this (linked below); its conclusion on executor size was:

"It depends but usually 10-40GB and 3-6 cores per executor is a good limit. "

A maximum number of executors is not as easy to pin down; it depends on the amount of data you want to analyze and the speed you need. Say you have 4 cores per executor and each executor can run 8 tasks, you want to analyze 100 GB of data, and you aim for roughly 128 MB, i.e. one HDFS block, per task: you would need about 800 tasks in total. To run them all at the same time you could go up to 100 executors for maximum performance, but you can also use fewer; the job would then simply run slower.

Bottom line: it is not unlike a MapReduce job. If you want a rule of thumb, the upper limit is (data size / HDFS block size) / (cores per executor x 2); for the example above that is 800 / (4 x 2) = 100 executors. More than that will not help you much (see the sketch after the link below).

http://www.slideshare.net/HadoopSummit/running-spark-in-production-61337353
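Just to make the arithmetic concrete, here is a small Python sketch of that rule of thumb; all numbers are the example values from above, not recommendations:

```python
# Rule-of-thumb upper bound on useful executors, as described above.
# All values are examples, not recommendations.
data_size_mb = 100 * 1024       # 100 GB of input data
hdfs_block_mb = 128             # roughly one task per HDFS block
cores_per_executor = 4

num_tasks = data_size_mb // hdfs_block_mb                      # -> 800 tasks
max_useful_executors = num_tasks // (cores_per_executor * 2)   # ~8 tasks per executor -> 100 executors

print(num_tasks, max_useful_executors)
```

The resulting values would then feed spark-submit flags such as --num-executors, --executor-cores and --executor-memory.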

Is there a max limit that users can be restricted to?

You can use YARN to create a dedicated queue for your Spark users. The Capacity Scheduler has per-queue user-limit settings that keep a single user from taking more than a given share of a queue; a user limit of 0.25, for example, would restrict any one user to 25% of the queue. Alternatively, you could give every user their own queue.
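Assuming the Capacity Scheduler, a sketch of what that could look like in capacity-scheduler.xml; the queue name spark and the 25% figures are just examples:

```xml
<!-- capacity-scheduler.xml: "spark" is a hypothetical queue name -->
<property>
  <name>yarn.scheduler.capacity.root.spark.user-limit-factor</name>
  <value>0.25</value> <!-- a single user can take at most 25% of the queue's capacity -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.spark.minimum-user-limit-percent</name>
  <value>25</value> <!-- how the queue is shared out once several users are active -->
</property>
```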

3. How does the RM handle resource allocation if most of the resources in a queue are consumed by Spark jobs? How is preemption handled?

The same way as for any other application in YARN; Spark is not special here. Preemption kills Spark executors, which is not great for Spark (although an application can survive it for a while). I would avoid preemption if I could.
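If you cannot avoid preemption cluster-wide but want to shield a Spark queue from it, newer Hadoop versions let you opt a queue out of preemption. A sketch, again assuming the Capacity Scheduler and the hypothetical queue name spark (check that your Hadoop version supports this property):

```xml
<!-- capacity-scheduler.xml: opt the hypothetical "spark" queue out of preemption -->
<property>
  <name>yarn.scheduler.capacity.root.spark.disable_preemption</name>
  <value>true</value>
</property>
```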

