
Installation failed during Restart App Timeline Server

Contributor

I'm installing HDP-2.3.0.0-2557 on a single node through Ambari 2.2.0.0 on CentOS 7, and the installation fails during Restart App Timeline Server. Below is the log.

stderr: /var/lib/ambari-agent/data/errors-91.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 147, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 524, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
    service('timelineserver', action='start')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service.py", line 79, in service
    try_sleep=1,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1.
/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD stdout: /var/lib/ambari-agent/data/output-91.txt 2016-03-02 10:24:11,693 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:11,693 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:11,694 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:11,713 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:11,713 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:11,732 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:11,732 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:11,732 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:11,820 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:11,820 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:11,820 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:11,841 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:11,841 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:11,860 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:11,860 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:11,860 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:11,861 - Group['hadoop'] {} 2016-03-02 10:24:11,862 - Group['users'] {} 2016-03-02 10:24:11,862 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,863 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,863 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']} 2016-03-02 10:24:11,864 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,864 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,865 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,865 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2016-03-02 10:24:11,867 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2016-03-02 10:24:11,872 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if 2016-03-02 10:24:11,872 - Group['hdfs'] {'ignore_failures': False} 2016-03-02 10:24:11,872 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']} 2016-03-02 10:24:11,873 - Directory['/etc/hadoop'] {'mode': 0755} 2016-03-02 10:24:11,883 - 
File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2016-03-02 10:24:11,884 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777} 2016-03-02 10:24:11,895 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2016-03-02 10:24:11,901 - Skipping Execute[('setenforce', '0')] due to not_if 2016-03-02 10:24:11,902 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,903 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,903 - Changing owner for /var/run/hadoop from 1003 to root 2016-03-02 10:24:11,903 - Changing group for /var/run/hadoop from 1000 to root 2016-03-02 10:24:11,904 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,907 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,908 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,909 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2016-03-02 10:24:11,915 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,915 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2016-03-02 10:24:11,916 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2016-03-02 10:24:11,919 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'} 2016-03-02 10:24:11,922 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2016-03-02 10:24:12,065 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:12,065 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:12,065 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:12,086 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:12,086 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:12,104 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:12,104 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:12,104 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:12,123 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:12,123 - Checking if need to create 
versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:12,123 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:12,142 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:12,142 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:12,162 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:12,162 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:12,162 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:12,165 - Execute['export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf stop timelineserver'] {'user': 'yarn'} 2016-03-02 10:24:12,236 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,236 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,239 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2016-03-02 10:24:12,245 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml 2016-03-02 10:24:12,245 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,257 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2016-03-02 10:24:12,263 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml 2016-03-02 10:24:12,263 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,296 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 
2016-03-02 10:24:12,302 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml 2016-03-02 10:24:12,303 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,332 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1005 to yarn 2016-03-02 10:24:12,332 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2016-03-02 10:24:12,339 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml 2016-03-02 10:24:12,339 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,399 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2016-03-02 10:24:12,405 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml 2016-03-02 10:24:12,405 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,413 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1003 to yarn 2016-03-02 10:24:12,413 - Directory['/mongodb/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,414 - Directory['/mongodb/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,414 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755} 2016-03-02 10:24:12,416 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmppOZiFG 2>/tmp/tmppGwVeJ''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,470 - call returned (0, '') 2016-03-02 10:24:12,471 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpQSW9ms 2>/tmp/tmpQgpORU''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,524 - call returned (0, '') 2016-03-02 10:24:12,525 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp7cw0fP 2>/tmp/tmpdL22N4''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,576 - call returned (0, '') 2016-03-02 10:24:12,577 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT 
'"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp1cM5ex 2>/tmp/tmptE9t39''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,627 - call returned (0, '') 2016-03-02 10:24:12,628 - HdfsResource['/ats/done/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0700} 2016-03-02 10:24:12,628 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpMrhe9r 2>/tmp/tmphZCDYZ''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,681 - call returned (0, '') 2016-03-02 10:24:12,682 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=SETPERMISSION&user.name=hdfs&permission=700'"'"' 1>/tmp/tmpzB10Zg 2>/tmp/tmpr5AniY''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,734 - call returned (0, '') 2016-03-02 10:24:12,735 - HdfsResource['/ats/active'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755} 2016-03-02 10:24:12,736 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp_v3VuR 2>/tmp/tmppyk0MP''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,790 - call returned (0, '') 2016-03-02 10:24:12,791 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpC_1I3y 2>/tmp/tmpFy8XD8''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,843 - call returned (0, '') 2016-03-02 10:24:12,843 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp3GkTbJ 2>/tmp/tmpbO9rNK''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,895 - call returned (0, '') 2016-03-02 10:24:12,896 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpQrzvfB 2>/tmp/tmp9msR_O''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,950 - call returned (0, '') 2016-03-02 10:24:12,951 - HdfsResource['/ats/active/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 
'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 01777} 2016-03-02 10:24:12,952 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpcNejNF 2>/tmp/tmpTpYnwR''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:13,015 - call returned (0, '') 2016-03-02 10:24:13,016 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=SETPERMISSION&user.name=hdfs&permission=1777'"'"' 1>/tmp/tmpO5kfCm 2>/tmp/tmpm59TAS''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:13,066 - call returned (0, '') 2016-03-02 10:24:13,066 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'} 2016-03-02 10:24:13,066 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'} 2016-03-02 10:24:13,069 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644} 2016-03-02 10:24:13,071 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644} 2016-03-02 10:24:13,075 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755} 2016-03-02 10:24:13,075 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] because contents don't match 2016-03-02 10:24:13,075 - File['/usr/hdp/current/hadoop-yarn-timelineserver/bin/container-executor'] {'group': 'hadoop', 'mode': 02050} 2016-03-02 10:24:13,077 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg'] {'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644} 2016-03-02 10:24:13,077 - Directory['/cgroups_test/cpu'] {'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:13,078 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755} 2016-03-02 10:24:13,080 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:13,081 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,087 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml 2016-03-02 10:24:13,087 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,112 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1004 to mapred 2016-03-02 10:24:13,113 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,119 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml 2016-03-02 10:24:13,119 - 
File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,127 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1004 to hdfs 2016-03-02 10:24:13,127 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,133 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml 2016-03-02 10:24:13,133 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,138 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:13,138 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,144 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml 2016-03-02 10:24:13,144 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,148 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,155 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml 2016-03-02 10:24:13,155 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,160 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'} 2016-03-02 10:24:13,160 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'} 2016-03-02 10:24:13,161 - File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'"} 2016-03-02 10:24:13,205 - File['/mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'ignore_failures': True, 'only_if': 'ls /mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'} 2016-03-02 10:24:13,251 - Skipping File['/mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'] due to only_if 2016-03-02 10:24:13,251 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf start timelineserver'] {'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'user': 'yarn'} 2016-03-02 10:24:14,361 - Execute['ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls 
/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`''] {'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'tries': 5, 'try_sleep': 1} 2016-03-02 10:24:14,468 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:15,530 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:16,593 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:17,657 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD
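
For anyone hitting the same failure: the check that returns 1 requires both that the PID file exists and that the PID recorded in it is still alive. Here ps -p prints only its header line, so the timeline server process died right after being started. The failing check can be reproduced manually with the commands below (a sketch based on the paths in the log above; the exact daemon log file name is an assumption, so adjust the glob if needed).

# Re-run the liveness check Ambari uses (the PID file must exist AND the PID must be alive)
sudo su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'

# When the check fails, the reason the daemon exited is usually in its log under /var/log/hadoop-yarn/yarn
sudo ls -lt /var/log/hadoop-yarn/yarn/ | head
sudo tail -n 100 /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver-*.log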
1 ACCEPTED SOLUTION

Contributor

I installed it on a fresh VM and now it is working fine. I also installed Ambari Metrics, but it is still not capturing any metrics.


10 REPLIES

Master Mentor

@vikas reddy Please consider installing HDP 2.3.4. What is in the /var/lib/ambari-agent/data/errors-91.txt file? Please also provide the Ambari logs.

Contributor

While installing from Ambari, it is pointing to the HDP 2.3.4.0 version only. Is there any other way to upgrade it now? I have uploaded the error log, Ambari log, and output log to my main question.


Ambari 2.2 changed some default YARN configs for HDP 2.3.4.0 that are not compatible out of the box with HDP 2.3.0.0. You need to set the following properties in yarn-site and then restart YARN (a command-line sketch for applying them is shown after the properties):

yarn.timeline-service.version=1.0
yarn.timeline-service.store-class=org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
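
If you prefer the command line to the Ambari UI, the same change can be made with Ambari's configs.sh helper. This is only a sketch: the script path is the usual default for Ambari 2.x, and the admin credentials, Ambari host, and cluster name below are placeholders for your environment.

# Set the two timeline-service properties in the yarn-site config via configs.sh
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME yarn-site "yarn.timeline-service.version" "1.0"
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME yarn-site "yarn.timeline-service.store-class" "org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore"
# Then restart YARN from Ambari so the new configuration version is applied.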

Contributor

Thanks, this worked. I changed the store class to LevelDB and it worked.

FYI: my environment is Ambari 2.2.0.0, HDP 2.3.2, RHEL 7.
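
To double-check that the new value actually reached the node, you can grep the rendered yarn-site.xml in the active client config directory (path as seen in the install log above):

# Confirm the store class now in effect on the timeline server host
grep -A 1 "yarn.timeline-service.store-class" /usr/hdp/current/hadoop-client/conf/yarn-site.xml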

Contributor

Attached: ambari-server-02.txt, var-lib-ambari-agent-data-errors-91.txt


Contributor

The metrics issue was a network problem. I just ran the command below.

iptables --flush

Now the metrics are showing in the dashboard.
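
Note that iptables --flush only clears the rules until the next reboot. On CentOS 7 / RHEL 7 the firewall is normally managed by firewalld, so if you intend to keep it out of the way on this node (reasonable on a single-node dev box, a judgment call for anything else), a more durable approach is:

# Stop the firewall now and keep it disabled across reboots (CentOS 7 / RHEL 7)
systemctl stop firewalld
systemctl disable firewalld
# Alternatively, keep the firewall on and open only the ports your HDP services use.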

Master Mentor

Restart all services after the Ambari Metrics install. If metrics still don't appear, go through the Ambari Metrics troubleshooting guide.
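
A quick way to separate a collector problem from a dashboard problem is to check whether the Metrics Collector is up and reachable from the monitored hosts. The port and log path below are the usual AMS defaults, so adjust them if your install differs; COLLECTOR_HOST is a placeholder.

# On the collector host: is the collector listening? (6188 is the usual AMS collector port)
ss -tlnp | grep 6188
# From a monitored host: can it reach the collector's metrics endpoint over the network?
curl -s -o /dev/null -w '%{http_code}\n' http://COLLECTOR_HOST:6188/ws/v1/timeline/metrics
# Collector log, default location on the collector host:
tail -n 100 /var/log/ambari-metrics-collector/ambari-metrics-collector.log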