Support Questions


org.apache.ambari.server.AmbariException: sudo

WARN [Server Action Executor Worker 3355] ServerActionExecutor:497 - Task #3355 failed to complete execution due to thrown exception: org.apache.ambari.server.AmbariException: sudo: a terminal is needed to execute sudo

org.apache.ambari.server.AmbariException: sudo: sorry, you must have a terminal to execute sudo


Super Mentor

@Elvis Zhang

Are you running Ambari as a non-root user?

If yes, then you should refer to the Ambari non-root setup documentation:

Also, please check your sudoers file (/etc/sudoers, edited via visudo) and verify the sudo permissions. Example:

# sudo visudo

## Allow root to run any commands anywhere
root  ALL=(ALL)  ALL
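A quick, hedged sanity check after editing sudoers (this assumes the non-root account is named ambari, as in this thread; adjust to your setup):

```shell
# Validate the sudoers file syntax after editing; -c only checks the file,
# it does not open an editor. (|| true so a missing visudo doesn't stop
# the rest of the script.)
visudo -c || true

# As the ambari user, confirm sudo can run non-interactively: -n makes sudo
# fail immediately instead of prompting for a password/terminal, which is
# the same condition Ambari hits during "Configure Ambari Identity".
if sudo -n true 2>/dev/null; then
  echo "sudo works without a tty"
else
  echo "sudo still needs a tty or password"
fi
```

If the second check prints the failure message, the tty/password requirement in sudoers is still blocking Ambari.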


Sudo Defaults - Ambari Server :

If sudo is not properly set up, the following error will be seen when the "Configure Ambari Identity" stage fails:
sudo: no tty present and no askpass program specified

Server action failed
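The usual fix for that "no tty present" error is to relax the requiretty setting for the Ambari account only, rather than globally (a minimal sketch, assuming the non-root user is named ambari — substitute your own account name):

```
## In /etc/sudoers (edit via visudo): disable the tty requirement
## for the ambari user only, instead of turning it off globally.
Defaults:ambari !requiretty
```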



Yes, I changed the user to ambari and the error is gone. But another question occurred: Hadoop's DataNode and NameNode can't be started.

resource_management.core.exceptions.ExecutionFailed: Execution of ' su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config 

Super Mentor

@Elvis Zhang

As you mentioned, now "hadoop's datanode and NameNode can't be start."

- Does that mean you are getting an error/exception in the DataNode/NameNode log?

- Or in the ambari-server.log? Can you please share the complete log along with the respective stack trace?

- Apart from NN and DN, are you able to start the other components (like ZooKeeper, etc.)?

- Apart from ambari-server, have you set up the "sudoer" permissions properly for every ambari-agent as well, as mentioned in the:


1. It's not that all services fail to start.

2. I have set up the "sudoer" permissions properly for every ambari-agent, as mentioned in "ambari_agent_for_non-root.html".

3. ZooKeeper can start.

4. The DataNode start error logs are below:

stderr: /var/lib/ambari-agent/data/errors-3731.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 174, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 280, in execute
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 720, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 61, in start
  File "/usr/lib/python2.6/site-packages/ambari_commons/", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 68, in datanode
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 269, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 155, in __init__
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 124, in run_action
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of ' su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-namenode-1.out

stdout: /var/lib/ambari-agent/data/output-3731.txt

2017-03-28 15:59:11,350 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version
2017-03-28 15:59:11,352 - Checking if need to create versioned

Super Mentor

@Elvis Zhang

This error seems to be occurring on the DataNode side: the start command appears to be triggered properly from the Ambari side but then fails, so looking at the .out file mentioned at the end of the error (/var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-namenode-1.out) might help.
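A small sketch for pulling the tail of those daemon logs on the DataNode host (the directory is the one from the error above; the exact hostname suffix will differ per node, so a glob is used):

```shell
# The stderr above says the daemon logs to a .out file under this directory;
# the matching .log file usually carries the full Java exception.
LOG_DIR=/var/log/hadoop/hdfs
for f in "$LOG_DIR"/hadoop-hdfs-datanode-*.out "$LOG_DIR"/hadoop-hdfs-datanode-*.log; do
  if [ -f "$f" ]; then
    echo "==== $f ===="
    tail -n 50 "$f"
  else
    echo "not found: $f"
  fi
done
```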



Super Mentor

@Elvis Zhang Also, it would be good to see if you can start the DataNode manually without any issue, using the following command:

# su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config /usr/hdp/current/hadoop-client/conf start datanode'
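If the manual start also fails silently, one thing worth ruling out is a stale PID file, since Ambari's Execute call above skips the start when the recorded process ID still "exists". A hedged sketch (the PID file path below is the typical HDP default and is an assumption — adjust to your layout):

```shell
# Hypothetical default location of the DataNode PID file on HDP;
# verify the actual path on your cluster before relying on it.
PID_FILE=/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid

if [ -f "$PID_FILE" ]; then
  pid=$(cat "$PID_FILE")
  # kill -0 only checks whether the process exists; it sends no signal.
  if kill -0 "$pid" 2>/dev/null; then
    echo "datanode appears to be running as pid $pid"
  else
    echo "stale pid file: $PID_FILE (pid $pid is gone)"
  fi
else
  echo "no pid file at $PID_FILE"
fi
```

A stale PID file can be removed (as the hdfs user) before retrying the start.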

