Member since: 01-25-2017
Posts: 25
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4411 | 03-27-2017 07:57 AM
03-27-2017 07:57 AM
After setting the two parameters below in the custom yarn-site.xml, things started working:
- yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity
- yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor
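For reference, a minimal sketch of how these two properties might look in yarn-site.xml. The values shown are only illustrative (the usual Hadoop defaults); the post does not say which values were actually used.

```xml
<!-- Illustrative values only; the post does not state which values were used.
     0.1 and 0.2 are the usual Hadoop defaults for these properties. -->
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity</name>
  <value>0.1</value>
</property>
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</name>
  <value>0.2</value>
</property>
```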
03-21-2017 10:36 PM
Thanks @Michael Young for your answer. But I don't think that's how it works. As per my understanding, the case you talked about is when preemption is disabled - i.e., new tasks have to wait until the existing ones are finished, and new ones cannot start if the available resources are less than the minimum requirement. I think the whole point of preemption is to avoid this scenario by forcefully killing containers held by existing jobs in over-utilized queues if they're not willing to release resources within 'x' amount of time. Please see here; STEP #3 reads "such containers will be forcefully killed by the ResourceManager to ensure that SLAs of applications in under-satisfied queues are met".

To answer your other question, I have 4 queues, Q1 to Q4, each with 25% minimum capacity and 100% maximum capacity. Q2 is divided into Q21 and Q22 with 50% (minimum) each. All of them use FIFO.
03-21-2017 12:41 PM
1 Kudo
Hi there,
I have enabled preemption for YARN as per: https://hortonworks.com/blog/better-slas-via-resource-preemption-in-yarns-capacityscheduler/
I observed that if the queues are already 100% occupied by Hive (Tez with container reuse enabled) or Spark jobs and a new job is submitted to any queue, it will not start until some of the existing tasks finish. At the same time, if I try to launch the Hive CLI, it will also hang forever until some tasks finish and resources are deallocated.
If Tez container reuse is disabled, new jobs do start getting resources - this is not because of preemption, but because each container lasts only a few seconds and the new containers go to the new jobs. Spark is not touched either way - it will not release any resources.
Does anyone have a hint as to why preemption is not happening? Also, how can Spark jobs be preempted?
Values are as follows:
yarn.resourcemanager.scheduler.monitor.enable = true
yarn.resourcemanager.scheduler.monitor.policies = org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval = 3000
yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill = 15000
yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round = 0.1
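For reference, a sketch of how these settings would appear in yarn-site.xml; the values are taken directly from the list above.

```xml
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
  <value>3000</value>
</property>
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
  <value>15000</value>
</property>
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
  <value>0.1</value>
</property>
```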
Labels:
- Apache Spark
- Apache YARN
02-27-2017 01:42 PM
This fixed my problem. I am on HDP 2.5.0 🙂
01-27-2017 08:59 AM
1 Kudo
@gnovak I completely figured out the issue - not sure if I can even call it an issue! It was the "user-limit-factor". In my case, each queue is used by only one user. My assumption was that if the minimum capacity of a sub-leaf (Q41) is 25% and it can grow up to 100% of its parent queue Q4, then the maximum user-limit-factor value Q41 can have would be 4 (4*25 = 100%). But this is not true! It can grow beyond that - up to the absolute maximum configured capacity! So the math is: max(user-limit-factor) = absolute maximum configured capacity / absolute configured capacity. The absolute values can be found in the Scheduler section of the Resource Manager UI. Once I adjusted the user-limit-factor so that a single user could take advantage of the whole capacity, problem solved! Thanks for your spark though!
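A minimal sketch of what this could look like in capacity-scheduler.xml, assuming the root.Q4.Q41 hierarchy described earlier in the thread. The value 10 comes from the worked numbers above, not from the original post: Q41's absolute configured capacity is 40% (Q4) x 25% (Q41) = 10%, its absolute maximum is 100%, so the largest useful user-limit-factor is 100 / 10 = 10.

```xml
<!-- Hypothetical sketch for queue root.Q4.Q41 from this thread.
     Absolute configured capacity of Q41 = 40% (Q4) x 25% (Q41) = 10%.
     Absolute maximum configured capacity of Q41 = 100%.
     Max useful user-limit-factor = 100 / 10 = 10, which lets a single
     user grow up to the queue's absolute maximum capacity. -->
<property>
  <name>yarn.scheduler.capacity.root.Q4.Q41.user-limit-factor</name>
  <value>10</value>
</property>
```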
01-26-2017 07:11 PM
@gnovak Perfect illustration, this kind of doc is not available on the internet; I wish Hortonworks would pin it somewhere 🙂 In your case, was the user-limit-factor set to 1? I also suspect the apps themselves as the reason they were not requesting more capacity. In my case, the workload was different: Q1 and Q2 had 1 app each with a small number of containers and a large amount of resources, while Q41 had one app with a larger number of containers but minimal resources (containers with the minimum configured memory and vcores in YARN). Anyway, I'll investigate more by pushing the same load to all queues simultaneously and see. Thank you for your time, much appreciated! 🙂
01-25-2017 08:11 PM
http://hortonworks.com/blog/better-slas-via-resource-preemption-in-yarns-capacityscheduler/ This doc says - "preemption works in conjunction with the scheduling flow to make sure that resources freed up at any level in the hierarchy are given back to the right queues in the right level".
01-25-2017 07:48 PM
This could partially explain the reason, thanks for the spark. But I would still expect that, in a FIFO queue, resources are handed out in a round-robin manner according to demand. Even then, there should be a more balanced distribution of resources across same-level queues, and thereby the sub-leafs should get a fair portion. Confusing! 😞
01-25-2017 05:37 PM
Thanks @Jasper for your reply. But pre-emption is enabled. I can confirm that because YARN jobs spawned in those queues say "Pre-emption enabled" in the Resource Manager. "I don't get why Q41 is only getting 10% and not 20%." ^ Actually, I was talking about the absolute capacity, so it's calculated as 25% of 40% = 10% absolute. So the minimum is satisfied. Excess resources are then moved to the queues one level above (Q1, Q2 & Q3). So it seems to me like queues at a certain level get more priority than their underlying sub-leafs. Meaning, if the minimum capacity of the sub-leafs is satisfied, the Resource Manager puts their parent in a wait list and allocates more resources to the other queues at the same level as the parent. This is what I observed; it doesn't make sense though!
01-25-2017 02:22 PM
Hi, I'm stuck with a problem and it would be really great if someone could help me! I'm running an HDP 2.5.0.0 cluster that uses the Capacity Scheduler. Let's say I have 4 queues - Q1, Q2, Q3 and Q4 - defined under root. Q1, Q2 and Q3 are leaf queues and have minimum and maximum capacities of 20% and 40% respectively (the three queues are configured alike). Q4 is a parent queue (minimum capacity 40%, maximum 100%) and has 4 leaf queues under it - let's say Q41, Q42, Q43 and Q44 (minimum 25, maximum 100 for all 4 sub-queues). All queues have the minimum user limit set to 100% and the user-limit-factor set to 1. A sketch of this layout is below.

Issue: When users submit jobs to Q1, Q2 and Q41 and the other queues are empty, I would expect Q1 and Q2 to be at 20%+ absolute capacity and Q4 at 40%+, roughly 25 (Q1), 25 (Q2) and 50 (Q41). But this is not happening. Q1 and Q2 always stay at 40%, and Q41 (and hence Q4) gets only 10% absolute capacity. Any idea why this is happening? Thanks.
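For clarity, a sketch of this layout in capacity-scheduler.xml, assuming the queue names used above; it is abridged, and Q2, Q3 and Q42-Q44 would follow the same pattern as their siblings.

```xml
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>Q1,Q2,Q3,Q4</value>
</property>
<!-- Q1 (Q2 and Q3 are configured the same way): 20% min, 40% max. -->
<property>
  <name>yarn.scheduler.capacity.root.Q1.capacity</name>
  <value>20</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q1.maximum-capacity</name>
  <value>40</value>
</property>
<!-- Q4 parent queue: 40% min, 100% max, with four sub-queues. -->
<property>
  <name>yarn.scheduler.capacity.root.Q4.capacity</name>
  <value>40</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q4.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q4.queues</name>
  <value>Q41,Q42,Q43,Q44</value>
</property>
<!-- Q41 (Q42-Q44 are configured the same way): 25% of Q4, max 100%,
     minimum user limit 100%, user-limit-factor 1. -->
<property>
  <name>yarn.scheduler.capacity.root.Q4.Q41.capacity</name>
  <value>25</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q4.Q41.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q4.Q41.minimum-user-limit-percent</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Q4.Q41.user-limit-factor</name>
  <value>1</value>
</property>
```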
Labels:
- Apache YARN