New Contributor
Posts: 8
Registered: ‎08-26-2016

How to set yarn.resourcemanager preemption.max_wait_before_kill ?


Hello,

Is it possible to set this parameter from the Cloudera Express 5.14.1 UI?

yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill

 

I strongly suspect YARN is killing some Spark executors before they can finish properly.

 

regards

Julien

Posts: 1,885
Kudos: 425
Solutions: 300
Registered: ‎07-31-2013

Re: How to set yarn.resourcemanager preemption.max_wait_before_kill ?

The property you refer to is for the Capacity Scheduler's preemption settings. Are you using the Capacity Scheduler in your cluster?

For parameters not exposed in a service's UI, you can typically add them via the safety valve for the relevant config file. The configuration you quoted is read by the ResourceManager, so add it to the following field:

YARN > Configuration > 'ResourceManager Advanced Configuration Snippet (Safety Valve) for yarn-site.xml', entering the property name and value in the UI.
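For reference, the safety valve entry corresponds to a yarn-site.xml property like the sketch below. The value shown is purely illustrative, not a recommendation (the upstream default is, I believe, 10000 ms):

```xml
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
  <!-- Milliseconds to wait after a container is marked for preemption
       before the ResourceManager force-kills it.
       60000 here is an illustrative example value only. -->
  <value>60000</value>
</property>
```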

Note: CDH and CM primarily support the Fair Scheduler, which has fair-share based preemption controls that can be fine-tuned extensively: http://blog.cloudera.com/blog/2018/06/yarn-fairscheduler-preemption-deep-dive/
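For comparison, Fair Scheduler preemption is switched on globally in yarn-site.xml and tuned per queue in the allocation file (fair-scheduler.xml). A minimal sketch follows; the queue name and the timeout/threshold values are made up for illustration:

```xml
<!-- yarn-site.xml: enable Fair Scheduler preemption globally -->
<property>
  <name>yarn.scheduler.fair.preemption</name>
  <value>true</value>
</property>

<!-- fair-scheduler.xml: per-queue tuning (all values illustrative) -->
<allocations>
  <queue name="analytics">
    <!-- Preempt from other queues after this queue has been below
         its fair-share threshold for 30 seconds -->
    <fairSharePreemptionTimeout>30</fairSharePreemptionTimeout>
    <!-- "Starving" means below 50% of fair share -->
    <fairSharePreemptionThreshold>0.5</fairSharePreemptionThreshold>
  </queue>
</allocations>
```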
Explorer
Posts: 12
Registered: ‎06-27-2017

Re: How to set yarn.resourcemanager preemption.max_wait_before_kill ?

Per the linked article (https://blog.cloudera.com/blog/2018/06/yarn-fairscheduler-preemption-deep-dive/):

 

  • Resources are preempted only if the resulting free space matches a starving application’s request. This ensures none of the preempted containers go unused.

 

Please help me understand this.

 

If a pending container needs 10 GB of RAM, but each container in a queue over its fair share holds less than 10 GB of RAM, none will be preempted. Is this a correct statement based on the bullet above?

Cloudera Employee
Posts: 61
Registered: ‎04-24-2017

Re: How to set yarn.resourcemanager preemption.max_wait_before_kill ?

Hi,

 

 

Multiple containers can be preempted to fulfill the need of one big container, provided those smaller containers are running in a queue over its fair share and the other conditions mentioned in the link make them eligible for preemption.

So if there are two 5 GB containers from that queue on a node that can be preempted, both will be preempted and their resources assigned to the 10 GB container in the starved queue.
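The matching behavior described above can be sketched as a toy greedy selection. This is only an illustration of the rule from the article, not YARN's actual implementation: preemptible containers (those from queues over their fair share) are accumulated until the starving request is fully covered, and if it cannot be fully covered, nothing is preempted at all.

```python
def pick_containers_to_preempt(candidates, needed_gb):
    """Toy illustration of FairScheduler-style preemption matching.

    candidates: sizes (in GB) of preemptible containers on one node,
    i.e. containers belonging to queues over their fair share.
    Returns the list of containers to kill, or [] if the request
    cannot be fully satisfied (in which case nothing is preempted).
    """
    chosen, freed = [], 0
    for size in sorted(candidates, reverse=True):  # largest first
        if freed >= needed_gb:
            break
        chosen.append(size)
        freed += size
    # Preempt only if the resulting free space covers the request,
    # so no preempted container's resources go unused.
    return chosen if freed >= needed_gb else []

# Two 5 GB containers can together satisfy a 10 GB request:
print(pick_containers_to_preempt([5, 5], 10))   # [5, 5]
# But if no combination reaches 10 GB, nothing is preempted:
print(pick_containers_to_preempt([4, 3], 10))   # []
```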

 

Regards
Bimal
