Replace hardware on NodeManager server / yarn.nodemanager.recovery.enabled = false

Solved

Unlike replacing a disk on a DataNode, I can't find any information about replacing a disk on a NodeManager server.

Since HDP sets yarn.nodemanager.recovery.enabled = true, my guess is that if I stop a NodeManager while containers are running, the jobs that own those containers will wait until that NodeManager is started again, which may not be convenient because it would affect our SLA.

If this is true, is there any issue with setting yarn.nodemanager.recovery.enabled = false permanently, so that when a NodeManager is stopped, the containers are (as I expect) recreated on another NodeManager?
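
For reference, the change I'm considering would look roughly like this in yarn-site.xml (in HDP I would actually make it through Ambari; the snippet is only to illustrate the property):

<!-- Proposed change, for illustration only: disable NodeManager recovery -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>false</value>
</property>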

1 ACCEPTED SOLUTION

Re: Replace hardware on NodeManager server / yarn.nodemanager.recovery.enabled = false

A NodeManager restart does not recreate containers; it reattaches to containers that are still running. That is, when a NodeManager is restarted, often only the NodeManager process has gone down, not the whole server. Instead of shooting down all containers and starting fresh, the restarted NodeManager can reattach to the still-running containers, which has much less impact on running applications.

This is especially valuable for long-running applications like Spark Streaming and for ApplicationMasters, so SLAs shouldn't be affected. If, for example, the whole node goes down, MapReduce and Tez will still detect the dead containers and ApplicationMasters and recreate them as necessary; NodeManager recovery has no impact on that.

http://hortonworks.com/blog/resilience-of-yarn-applications-across-nodemanager-restarts/
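
As far as I know (the blog above covers the details), reattachment also relies on the NodeManager RPC address being pinned to an explicit port rather than an ephemeral one, in addition to recovery being enabled. A minimal sketch in yarn-site.xml, with an example port only:

<!-- Example only: a fixed NM port so a restarted NodeManager can reattach to its containers -->
<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>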

4 REPLIES

Re: Replace hardware on NodeManager server / yarn.nodemanager.recovery.enabled = false

See this: https://issues.apache.org/jira/browse/YARN-1336 @Hajime

I don't recommend setting this to false. This setting helps significantly when the NodeManager fails or is restarted for various reasons.

Also make sure that yarn.nodemanager.recovery.dir points to a non-temporary directory.
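
A rough sketch of the recovery-related properties in yarn-site.xml (the path is an example only; in HDP these would normally be managed through Ambari):

<!-- Keep work-preserving NodeManager recovery enabled -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<!-- Example path only: use a directory that survives reboots, not something under /tmp -->
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/yarn-nm-recovery</value>
</property>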

Re: Replace hardware on NodeManager server / yarn.nodemanager.recovery.enabled = false

So, for hardware replacement, I don't need to worry about recovery? Just shut down the OS?

Re: Replace hardware on NodeManager server / yarn.nodemanager.recovery.enabled = false

If you shut down the OS, all tasks running on that node will be stopped too, so you don't need to worry about recovery. You might kill the ApplicationMasters running on that node, though. There is no graceful shutdown of a NodeManager that waits for running applications to finish as of yet (AFAIK; if someone knows better, let me know). YARN depends on the applications themselves to handle task or AM failures gracefully.

https://issues.apache.org/jira/browse/YARN-914
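
To illustrate that last point, how many ApplicationMaster failures an application tolerates is governed by settings such as the following (values shown are examples only):

<!-- Cluster-wide cap on ApplicationMaster attempts (example value) -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>2</value>
</property>
<!-- Per-job MapReduce AM attempt limit (example value) -->
<property>
  <name>mapreduce.am.max-attempts</name>
  <value>2</value>
</property>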
