
Yarn NodeManager fails to start


We just enabled FreeIPA integration in our Hortonworks cluster (HDP 2.5.3).
We understand that, as part of kerberizing YARN, Ambari/YARN will try to delete the yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. In our environment the directories defined for these configs are actual mount points, so as expected YARN throws this error message: "OSError: [Errno 16] Device or resource busy: '/hadoop/yarn/local/01'".

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/", line 161, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 280, in execute
  File "/var/lib/ambari-agent/cache/common-services/YARN/", line 51, in start
    self.configure(env) # FOR SECURITY
  File "/var/lib/ambari-agent/cache/common-services/YARN/", line 57, in configure
  File "/usr/lib/python2.6/site-packages/ambari_commons/", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/", line 168, in yarn
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 114, in __new__
    cls(names_list.pop(0), env, provider, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 155, in __init__
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 124, in run_action
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/", line 208, in action_delete
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 102, in rmtree
  File "/usr/lib64/python2.7/", line 256, in rmtree
    onerror(os.rmdir, path, sys.exc_info())
  File "/usr/lib64/python2.7/", line 254, in rmtree
OSError: [Errno 16] Device or resource busy: '/hadoop/yarn/local/01'
2017-10-25 18:19:36,019 - checked_call returned (0, '')
2017-10-25 18:19:36,019 - Ensuring that hadoop has the correct symlink structure
2017-10-25 18:19:36,019 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-25 18:19:36,025 - Directory['/hadoop/yarn/local/01'] {'action': ['delete']}
2017-10-25 18:19:36,026 - Removing directory Directory['/hadoop/yarn/local/01'] and all its content

Command failed after 1 tries

Any idea how we can get around this? Is there some script or flag we can modify so that it avoids attempting to delete these directories?

Just an FYI, we can start the NM service on our DN manually.

Any help would be appreciated.




Super Mentor


Yes, you will see the following kind of warning message while enabling Kerberos from the Ambari UI:
"YARN log and local dir will be deleted and ResourceManager state will be formatted as part of Enabling/Disabling Kerberos."

This is implemented as part of "": if YARN is an installed service, then as the first step of the Kerberos wizard (and also while disabling Kerberos), the user should be informed that the YARN log and local dirs will be deleted and the RM state will be formatted as part of enabling/disabling Kerberos. This lets the user take a backup of these dirs at the beginning of the wizard, if desired.

This will happen if the "local-dirs" are mounted filesystems; in that case this warning will always be shown, and the delete step will fail as you saw.

So you should unmount the dirs, then enable Kerberos, and then remount them.
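Before walking through the wizard, it can help to confirm which of the configured dirs are actually mount points, since those are the ones the delete step will fail on. A minimal sketch (the paths are the ones from this thread; substitute your own yarn.nodemanager.local-dirs and log-dirs values):

```shell
# Flag YARN dirs that are mount points: Ambari's delete step fails with
# EBUSY (Errno 16) on these, so unmount them before enabling Kerberos
# and remount them afterwards.
for d in /hadoop/yarn/local/01 /hadoop/yarn/log/01; do
  if mountpoint -q "$d"; then
    echo "$d is a mount point: umount it before the wizard, remount it after"
  fi
done
```

`mountpoint -q` (from util-linux) exits 0 only for an actual mount point, so the loop stays silent for plain directories.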


Yup. We got it resolved earlier this evening, and that's what we did. Is this fact mentioned somewhere in the YARN installation/configuration documentation? It's not like people set up Kerberos at the same moment they set up all the components, right? The components are in place long before they get kerberized.

Anyhow, thanks for the response.

New Contributor

I faced a similar issue.

We changed the NodeManager local directory to a different mount point and restarted the services, and it came up and running. After this, we reverted the change back to the old directory.
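That workaround amounts to temporarily pointing yarn.nodemanager.local-dirs at a different path (normally changed through the Ambari YARN config screen rather than by editing files), restarting the NodeManagers, and then reverting. In yarn-site.xml terms it is just this property (the temporary path below is hypothetical):

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- temporary, non-mount-point path (hypothetical); revert to the
       original mount points after the kerberization step completes -->
  <value>/hadoop/yarn/local-tmp</value>
</property>
```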
