
Can we prevent the ResourceManager from retrying a failed ApplicationMaster on the same NodeManager?

Guru

I have seen the AM being retried on the same node where the first attempt failed, causing the job to fail. There are situations where something is wrong with the node (disk space or other issues), so any number of retries there will fail.

Is there any way to ensure that AM retries always go to a different NodeManager? Is the current policy to always retry on the same NodeManager?


7 REPLIES

Master Mentor

@ravi@hortonworks.com More info added below.

From the ResourceRequest Javadoc: setResourceName

@InterfaceAudience.Public
@InterfaceStability.Stable
public abstract void setResourceName(String resourceName)
Set the resource name (e.g. host/rack) on which the allocation is desired. A special value of * signifies that any resource name (e.g. host/rack) is acceptable.
Parameters:
resourceName - (e.g. host/rack) on which the allocation is desired
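
For illustration, here is a minimal sketch (not from the thread) of how a custom AM could build a ResourceRequest for any host versus a specific host; the host name worker02.example.com is hypothetical:

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ResourceNameSketch {
  public static void main(String[] args) {
    Resource capability = Resource.newInstance(1024, 1); // 1 GB, 1 vcore
    Priority priority = Priority.newInstance(0);

    // Resource name "*" (ResourceRequest.ANY): any host/rack is acceptable
    ResourceRequest anyHost = ResourceRequest.newInstance(
        priority, ResourceRequest.ANY, capability, 1);

    // Naming a specific host instead (hypothetical host name)
    ResourceRequest onHost = ResourceRequest.newInstance(
        priority, "worker02.example.com", capability, 1);

    System.out.println(anyHost);
    System.out.println(onHost);
  }
}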
Or you can research Node Labels.

Another doc link:

@Public
@Stable
public static ResourceRequest newInstance(Priority priority, String hostName,
    Resource capability, int numContainers, boolean relaxLocality,
    String labelExpression) {
  ResourceRequest request = Records.newRecord(ResourceRequest.class);
  request.setPriority(priority);
  request.setResourceName(hostName);
  request.setCapability(capability);
  request.setNumContainers(numContainers);
  request.setRelaxLocality(relaxLocality);
  request.setNodeLabelExpression(labelExpression);
  return request;
}
Note that a node label expression only takes effect when the resource name is ANY or a rack, not a data-local (specific host) request.
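
As a hedged illustration of the factory method above, a request constrained by a node label might look like this ("stable_nodes" is an assumed label that a cluster admin would have to define and map to nodes; it is not from the thread):

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class NodeLabelSketch {
  public static void main(String[] args) {
    // "stable_nodes" is a hypothetical node label defined by the cluster admin
    ResourceRequest labeled = ResourceRequest.newInstance(
        Priority.newInstance(0),
        ResourceRequest.ANY,           // label expressions apply to ANY/rack requests
        Resource.newInstance(2048, 1), // 2 GB, 1 vcore
        1,                             // numContainers
        true,                          // relaxLocality
        "stable_nodes");               // node label expression
    System.out.println(labeled);
  }
}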

Guru

It has nothing to do with labels; the issue would be the same with Node Labels. Whenever an AM fails for any reason, I see the retry happening on the same node. If the first AM failed because of a node-related issue, the second one will fail for the same reason. What we are looking for is whether any config change can ensure that the AM retry does not happen on the same node.

Master Mentor

@ravi@hortonworks.com Then, AFAIK, your best bet is the approach pointed out in my previous response.

Guru

The question is more about how the MapReduce AM can have a policy of not rerunning on the same node where it failed on the first try. This is not a custom YARN app where we can decide where the AM should go. If the MapReduce AM can't do this today, it might be better to drive a support ticket for an enhancement, since with the current approach a problem in a single NodeManager can cause MapReduce jobs to fail.

Master Mentor

@ravi@hortonworks.com If you don't want to customize and want it to be automatic as part of the YARN architecture, then yes, you can open an enhancement request.

ACCEPTED SOLUTION

There is ongoing work on this: as people note, it's not something the app can do itself; the RM needs to make the decision.

Look at YARN-4389, which will allow individual apps to set a threshold for failures on a node before the RM blacklists it.

Guru

Thanks Steve.

In our case, we are looking to set this at the RM level, not necessarily even at the app/AM level. So if the AM fails for any reason, just don't retry it on the same host; pick another one. Based on the error, it might also be a good option to blacklist the node at the RM level so no further AMs are sent there.
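
For what it's worth, Hadoop 2.8-era releases expose RM-level AM blacklisting through yarn-site.xml. A hedged sketch; verify the property names and defaults against the yarn-default.xml of the version in use:

<!-- Sketch: RM-level AM blacklisting (Hadoop 2.8-era property names;
     verify against your version's yarn-default.xml) -->
<property>
  <name>yarn.am.blacklisting.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Fraction of cluster nodes that can be blacklisted for AM placement
       before the blacklist is ignored again -->
  <name>yarn.am.blacklisting.disable-failure-threshold</name>
  <value>0.8</value>
</property>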