
Many times we find that operations (start/stop/restart, etc.) fail from the Ambari UI. In such cases, if we want to troubleshoot what the Ambari UI did to perform that operation, or how the commands were executed, we can manually execute those same operations on the individual host with the help of the "/var/lib/ambari-agent/data/command-xxx.json" file.

- When we perform any operation from the Ambari UI (such as starting or stopping a DataNode), Ambari shows the operation's progress in the UI. There we can see references to the following two files:


stderr:  /var/lib/ambari-agent/data/errors-952.txt
stdout:  /var/lib/ambari-agent/data/output-952.txt


- Apart from the above files, there is one more important file which the ambari-agent uses to execute the instructions/commands sent by the Ambari server. We can find that file on the agent host: "/var/lib/ambari-agent/data/command-xxx.json".


- Here the "command-xxx.json" file has the same command ID (xxx) as the "errors-xxx.txt" and "output-xxx.txt" files (for example: command-952.json, errors-952.txt, output-952.txt).
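This one-to-one correspondence can be checked directly on the host. The following is a small sketch (not from the article); `files_for_task` is a hypothetical helper, and the default agent data directory is assumed:

```python
# Hypothetical helper: list the agent files that share one task ID,
# assuming the default /var/lib/ambari-agent/data layout.
import glob
import os


def files_for_task(task_id, data_dir="/var/lib/ambari-agent/data"):
    """Return the command/output/errors file names belonging to one task ID."""
    pattern = os.path.join(data_dir, "*-%d.*" % task_id)
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

For task 952 this would list command-952.json, errors-952.txt, and output-952.txt (whichever of them exist on that host).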

- The "command-xxx.json" file contains a lot of information, especially the "localComponents", "configuration_attributes", "configurationTags" and the command type. In this file we can find a data snippet like the following:

    "public_hostname": "", 
    "commandId": "53-0", 
    "hostname": "", 
    "kerberosCommandParams": [], 
    "serviceName": "HDFS", 
    "role": "DATANODE", 
    "forceRefreshConfigTagsBeforeExecution": false, 
    "requestId": 53, 
    "agentConfigParams": {
        "agent": {
            "parallel_execution": 0
    "clusterName": "ClusterDemo", 
    "commandType": "EXECUTION_COMMAND", 
    "taskId": 952, 
    "roleParams": {
        "component_category": "SLAVE"



How can we execute the same command from the host ("") where the operation was actually performed?



======= Log in to the host on which the command was executed. Here it is "", which we can see in the Ambari UI operations history for the DataNode operation.



======= Since the operation we were performing was a DataNode Start, we will execute the "" script, as follows:

[root@c6402 ambari-agent]# PATH=$PATH:/var/lib/ambari-agent/

[root@c6402 ambari-agent]# python2.6 /var/lib/ambari-agent/cache/common-services/HDFS/ START /var/lib/ambari-agent/data/command-952.json  /var/lib/ambari-agent/cache/common-services/HDFS/  /tmp/Jay/tmp.txt ERROR /tmp/Jay/

** NOTICE: ** Here we have temporarily modified the PATH variable. We need to set the PATH to make sure that the "" script is resolvable when we execute the command, because the ambari-agent executes these commands with the help of that "" script, so it must be available in the PATH. Otherwise, we might see the following kind of error:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 174, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 280, in execute
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 58, in start
    import params
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 25, in <module>
    from params_linux import *
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 20, in <module>
    import status_params
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 53, in <module>
    hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/", line 477, in get_hadoop_conf_dir
    select(stack_name, "hadoop", version)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/", line 315, in select
    shell.checked_call(_get_cmd("set-conf-dir", package, version), logoutput=False, quiet=False, sudo=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-python-wrap /usr/bin/conf-select set-conf-dir --package hadoop --stack-version --conf-version 0' returned 127. /bin/bash: command not found
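An exit code of 127 with "command not found" means the shell could not resolve an executable via PATH. A quick generic check (a sketch, not Ambari code; the script name is whatever wrapper your agent uses, passed in as a parameter) can confirm this up front:

```python
# Generic sketch: check whether a named executable resolves via PATH,
# which is what the 127 / "command not found" failure above boils down to.
import os


def on_path(name):
    """Return True if `name` is an executable file on some PATH entry."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return True
    return False
```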


- Here the "/var/lib/ambari-agent/cache/common-services/HDFS/" script will have the following arguments:

Script expects at least 6 arguments

<JSON_CONFIG> path to command json file. Ex: /var/lib/ambari-agent/data/command-2.json
<BASEDIR> path to service metadata dir. Ex: /var/lib/ambari-agent/cache/common-services/HDFS/
<STROUTPUT> path to file with structured command output (file will be created). Ex:/tmp/my.txt
<LOGGING_LEVEL> log level for stdout. Ex:DEBUG,INFO
<TMP_DIR> temporary directory for executable scripts. Ex: /var/lib/ambari-agent/tmp
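Putting the argument list together, the manual invocation can be sketched as below. This is an illustrative helper (not part of Ambari); the script path is a placeholder you must fill in yourself, since the article's path is truncated:

```python
# Sketch: assemble the argv for the manual invocation. The ordering of the
# positional arguments mirrors the list above; script_path is a placeholder.
import subprocess


def build_argv(script_path, command, json_config, basedir,
               strout, log_level, tmp_dir):
    """Positional arguments in the order the agent passes them."""
    return ["python2.6", script_path, command, json_config,
            basedir, strout, log_level, tmp_dir]


def run_component_command(*args):
    """Run the component script with the assembled argv; returns its exit code."""
    return subprocess.call(build_argv(*args))
```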


- Once we have executed the above command, we can see that the DataNode starts exactly the same way as when we start it from the Ambari UI. This also helps us troubleshoot cases where the ambari-server and ambari-agent are not communicating well, and to isolate the issue.

Example output:


[root@c6402 Jay]#  python2.6 /var/lib/ambari-agent/cache/common-services/HDFS/ START /var/lib/ambari-agent/data/command-952.json  /var/lib/ambari-agent/cache/common-services/HDFS/  /tmp/Jay/tmp.txt DEBUG /tmp/Jay/
2016-12-17 08:35:27,489 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version
2016-12-17 08:35:27,489 - Checking if need to create versioned conf dir /etc/hadoop/
2016-12-17 08:35:27,489 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-17 08:35:27,506 - call returned (1, '/etc/hadoop/ exist already', '')
2016-12-17 08:35:27,507 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-17 08:35:27,523 - checked_call returned (0, '')
2016-12-17 08:35:27,523 - Ensuring that hadoop has the correct symlink structure
2016-12-17 08:35:27,524 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-17 08:35:27,529 - Stack Feature Version Info: stack_version=2.5, version=, current_cluster_version= ->
2016-12-17 08:35:27,530 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version
2016-12-17 08:35:27,531 - Checking if need to create versioned conf dir /etc/hadoop/
2016-12-17 08:35:27,531 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-17 08:35:27,548 - call returned (1, '/etc/hadoop/ exist already', '')
2016-12-17 08:35:27,549 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-12-17 08:35:27,568 - checked_call returned (0, '')
2016-12-17 08:35:27,568 - Ensuring that hadoop has the correct symlink structure
2016-12-17 08:35:27,568 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-17 08:35:27,574 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2016-12-17 08:35:27,588 - checked_call returned (0, '', '')
2016-12-17 08:35:27,591 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2016-12-17 08:35:27,597 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-12-17 08:35:27,599 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-12-17 08:35:27,606 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-12-17 08:35:27,607 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,615 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-12-17 08:35:27,622 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-12-17 08:35:27,622 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,631 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2016-12-17 08:35:27,632 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-12-17 08:35:27,639 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-12-17 08:35:27,639 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,644 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-12-17 08:35:27,651 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-12-17 08:35:27,652 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,658 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {'final': {'dfs.datanode.failed.volumes.tolerated': 'true', '': 'true', '': 'true', '': 'true', 'dfs.webhdfs.enabled': 'true'}}, 'configurations': ...}
2016-12-17 08:35:27,665 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-12-17 08:35:27,666 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,715 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2016-12-17 08:35:27,721 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-12-17 08:35:27,721 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-17 08:35:27,738 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-12-17 08:35:27,739 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2016-12-17 08:35:27,740 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2016-12-17 08:35:27,744 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/boot', '/proc/sys/fs/binfmt_misc', '/var/lib/nfs/rpc_pipefs'].
2016-12-17 08:35:27,744 - Mount point for directory /hadoop/hdfs/data is /
2016-12-17 08:35:27,744 - Mount point for directory /hadoop/hdfs/data is /
2016-12-17 08:35:27,744 - Last mount for /hadoop/hdfs/data in the history file is /
2016-12-17 08:35:27,744 - Will manage /hadoop/hdfs/data since it's on the same mount point: /
2016-12-17 08:35:27,745 - Forcefully ensuring existence and permissions of the directory: /hadoop/hdfs/data
2016-12-17 08:35:27,745 - Directory['/hadoop/hdfs/data'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0755, 'owner': 'hdfs'}
2016-12-17 08:35:27,749 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/boot', '/proc/sys/fs/binfmt_misc', '/var/lib/nfs/rpc_pipefs'].
2016-12-17 08:35:27,749 - Mount point for directory /hadoop/hdfs/data is /
2016-12-17 08:35:27,749 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/hadoop/hdfs/data,/\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-12-17 08:35:27,750 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-12-17 08:35:27,751 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2016-12-17 08:35:27,752 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2016-12-17 08:35:27,752 - File['/var/run/hadoop/hdfs/'] {'action': ['delete'], 'not_if': '  -H -E test -f /var/run/hadoop/hdfs/ &&  -H -E pgrep -F /var/run/hadoop/hdfs/'}
2016-12-17 08:35:27,760 - Skipping File['/var/run/hadoop/hdfs/'] due to not_if
2016-12-17 08:35:27,761 - Execute[' su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': '  -H -E test -f /var/run/hadoop/hdfs/ &&  -H -E pgrep -F /var/run/hadoop/hdfs/'}
2016-12-17 08:35:27,767 - Skipping Execute[' su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config /usr/hdp/current/hadoop-client/conf start datanode''] due to not_if
2016-12-17 08:35:27,796 - Command: /usr/bin/hdp-select status hadoop-hdfs-datanode > /tmp/tmp6ogtMa
Output: hadoop-hdfs-datanode -



Last update: 08-17-2019 07:20 AM