Created 03-02-2016 08:54 AM
Unlike replacing a disk for a DataNode, I can't find any information on replacing a disk for a NodeManager.
Since HDP sets yarn.nodemanager.recovery.enabled = true, my guess is that if I stop a NodeManager while containers are running, the jobs tied to those containers will wait until the NodeManager comes back up, which may not be convenient as it would affect the SLA.
If that is true, is there any issue with setting yarn.nodemanager.recovery.enabled = false permanently, so that when a NodeManager is stopped, the containers will (I expect) be created on another NodeManager?
Created 03-02-2016 09:25 AM
A NodeManager restart does not recreate containers; it reattaches to existing containers that are still running. That is, when a NodeManager is restarted, often only the NodeManager process was restarted, not the whole server. Instead of killing all containers and starting fresh, the NodeManager can reattach to the still-running containers, which has much less impact on running applications.
This is especially valuable for long-running applications such as Spark Streaming jobs and ApplicationMasters, so SLAs shouldn't be affected. If, on the other hand, the whole node goes down, MapReduce and Tez will still detect the dead containers and ApplicationMasters and recreate them as necessary; YARN recovery has no impact on that.
http://hortonworks.com/blog/resilience-of-yarn-applications-across-nodemanager-restarts/
Created 03-02-2016 09:21 AM
See https://issues.apache.org/jira/browse/YARN-1336 @Hajime
I don't recommend setting this to false. This setting helps significantly when a NodeManager fails for various reasons.
Make sure that yarn.nodemanager.recovery.dir points to a non-temporary directory.
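For reference, the relevant yarn-site.xml properties look roughly like this (the directory path below is just an example; the point is to use a persistent local directory, not something under /tmp):

```xml
<!-- yarn-site.xml: NodeManager work-preserving restart -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Must be a persistent (non-tmp) local directory so the saved
       container state survives a NodeManager process restart. -->
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/log/hadoop-yarn/nodemanager/recovery-state</value>
</property>
```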
Created 03-02-2016 09:29 AM
So... for hardware replacement, I don't need to worry about recovery? Just shut down the OS?
Created 03-02-2016 09:37 AM
If you shut down the OS, all tasks running on that node will be stopped too, so you don't need to worry about recovery. You might kill the running ApplicationMasters on that node though. There is no graceful shutdown of a NodeManager that waits for running applications to finish as of yet (AFAIK; if someone knows better, let me know). YARN depends on applications to handle task or AM failures gracefully.
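If you want YARN to stop scheduling new containers on the node before taking it down, the usual approach is to point the ResourceManager at an exclude file and list the host there (a sketch; the file path below is an example, adjust to your layout):

```xml
<!-- yarn-site.xml on the ResourceManager -->
<property>
  <!-- File listing hostnames that should be decommissioned -->
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>/etc/hadoop/conf/yarn.exclude</value>
</property>
```

After adding the hostname to the exclude file, run `yarn rmadmin -refreshNodes` to decommission the node. Note that this still terminates containers already running on that node, consistent with the lack of a graceful NodeManager shutdown mentioned above.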