
YARN fails to preempt Spark job

Explorer

Hello,

 

We are running Cloudera Hadoop 2.5.0-cdh5.3.0 on CentOS 6.5.

 

We are trying to get job preemption working: first we submit a long-running Spark job with spark-submit (Spark 1.5.2) to a named queue, then we submit further Sqoop jobs to the cluster on another named queue.
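For illustration, the submissions look roughly like this (the queue names and job details are placeholders, not our real values):

    spark-submit --master yarn --deploy-mode cluster \
        --queue spark_queue \
        long_running_job.py

    sqoop import -Dmapreduce.job.queuename=sqoop_queue \
        --connect jdbc:mysql://dbhost/ourdb --table some_table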

 

We are using the DRF scheduling policy with 3 queues, weighted 3:1:1, and the min share preemption timeout (in seconds) set to 60:300:300.

 

The YARN config is set with:

Admin ACL = *

Fair Scheduler Preemption = true
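In Fair Scheduler terms, this is roughly equivalent to an allocation file (fair-scheduler.xml) along these lines; the queue names here are placeholders:

    <allocations>
      <queue name="spark_queue">
        <weight>3</weight>
        <schedulingPolicy>drf</schedulingPolicy>
        <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
      </queue>
      <queue name="sqoop_queue">
        <weight>1</weight>
        <schedulingPolicy>drf</schedulingPolicy>
        <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
      </queue>
      <queue name="other_queue">
        <weight>1</weight>
        <schedulingPolicy>drf</schedulingPolicy>
        <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
      </queue>
    </allocations>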

 

Jobs are submitted and accepted by the RM; however, no preemption is occurring. I am expecting to see the long-running Spark job interrupted and some resources diverted to the later-submitted Sqoop jobs. Instead, all I am seeing is jobs accepted and queued up.

 

Can anyone confirm this is the correct understanding, and if so, is there config I am missing?

 

Thanks

 

 

1 ACCEPTED SOLUTION

Super Collaborator

CDH 5.3 does not come with Spark 1.5. You are running an unsupported cluster. Please be aware of that.

 

Weight has nothing to do with preemption; that is a common misunderstanding. The weight just decides which queue gets a higher priority during the scheduling cycle. So if I have queues with weights 3:1:1, then out of every 10 schedule attempts 6 will go to the queue with weight 3 and 2 attempts will go to each queue with weight 1, totalling 10 attempts.

 

Minimum share preemption only works if you have the minimum and maximum shares set for the queue, so make sure you have that. The fair share of a queue is calculated based on the demand in the queue (i.e. the applications running in the queue), so you might not be hitting the fair share preemption threshold.
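As a sketch (the queue name and resource figures are placeholders), a queue with minimum and maximum shares set would look something like this in the allocation file:

    <queue name="sqoop_queue">
      <minResources>10240 mb, 4 vcores</minResources>
      <maxResources>40960 mb, 16 vcores</maxResources>
      <weight>1</weight>
      <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
    </queue>

With minResources set, the min share preemption timeout you configured can actually trigger once the queue has been below its minimum share for that long.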

 

Wilfred

 


3 REPLIES


 

Explorer

Many thanks Wilfred, that's fixed it.

 

And yes, we're planning an upgrade to CDH 5.5.

New Contributor

Hi Wilfred,

 

Thanks for your response.

 

I am using the same setup but with standard Spark 1.3. However, I am seeing that even though I have set the minimum and maximum shares for a queue (both min and max memory set at 80% of available memory), if there is already a Spark job running in a different queue taking 40% of memory, it is never preempted! The job in the queue with 80% memory, asking for 70% of memory, waits until the job in the other queue is finished. It is odd that in the same setup preemption works for Hadoop MapReduce jobs but not for Spark jobs. Any idea?

 

Thanks,

Vinay