
MapReduce History Server startup error on freshly installed cluster

Explorer

Freshly installed a cluster with Ambari 2.4 and HDP 2.5.3.

The MapReduce History Server service won't start right after installation (or at any point afterwards, for that matter).

Error and stdout logs below:

2017-10-25 11:57:07,628 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-10-25 11:57:07,628 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-10-25 11:57:07,629 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-10-25 11:57:07,661 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-10-25 11:57:07,662 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-10-25 11:57:07,694 - checked_call returned (0, '')
2017-10-25 11:57:07,696 - Ensuring that hadoop has the correct symlink structure
2017-10-25 11:57:07,696 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-25 11:57:07,847 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-10-25 11:57:07,847 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-10-25 11:57:07,847 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-10-25 11:57:07,881 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-10-25 11:57:07,882 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-10-25 11:57:07,916 - checked_call returned (0, '')
2017-10-25 11:57:07,918 - Ensuring that hadoop has the correct symlink structure
2017-10-25 11:57:07,918 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-25 11:57:07,920 - Group['hadoop'] {}
2017-10-25 11:57:07,922 - Group['users'] {}
2017-10-25 11:57:07,922 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-10-25 11:57:07,923 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-10-25 11:57:07,924 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-10-25 11:57:07,924 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-10-25 11:57:07,925 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-10-25 11:57:07,926 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-10-25 11:57:07,929 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-10-25 11:57:07,940 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-10-25 11:57:07,941 - Group['hdfs'] {}
2017-10-25 11:57:07,941 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-10-25 11:57:07,942 - FS Type: 
2017-10-25 11:57:07,942 - Directory['/etc/hadoop'] {'mode': 0755}
2017-10-25 11:57:07,961 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-10-25 11:57:07,962 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-10-25 11:57:07,979 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-10-25 11:57:07,994 - Skipping Execute[('setenforce', '0')] due to not_if
2017-10-25 11:57:07,995 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-10-25 11:57:07,999 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-10-25 11:57:07,999 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-10-25 11:57:08,005 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-10-25 11:57:08,007 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-10-25 11:57:08,007 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-10-25 11:57:08,025 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-10-25 11:57:08,026 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-10-25 11:57:08,027 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-10-25 11:57:08,032 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-10-25 11:57:08,042 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-10-25 11:57:08,317 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-10-25 11:57:08,318 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-10-25 11:57:08,318 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-10-25 11:57:08,352 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-10-25 11:57:08,353 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-10-25 11:57:08,386 - checked_call returned (0, '')
2017-10-25 11:57:08,387 - Ensuring that hadoop has the correct symlink structure
2017-10-25 11:57:08,387 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-25 11:57:08,389 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-10-25 11:57:08,428 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37')
2017-10-25 11:57:08,430 - Stack Feature Version Info: stack_version=2.5, version=2.5.3.0-37, current_cluster_version=2.5.3.0-37 -> 2.5.3.0-37
2017-10-25 11:57:08,432 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-10-25 11:57:08,432 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-10-25 11:57:08,433 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-10-25 11:57:08,466 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-10-25 11:57:08,467 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-10-25 11:57:08,504 - checked_call returned (0, '')
2017-10-25 11:57:08,505 - Ensuring that hadoop has the correct symlink structure
2017-10-25 11:57:08,506 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-25 11:57:08,512 - HdfsResource['/app-logs'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2017-10-25 11:57:08,517 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpuH5itp 2>/tmp/tmpAhQd0W''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,591 - call returned (0, '')
2017-10-25 11:57:08,595 - Skipping the operation for not managed DFS directory /app-logs since immutable_paths contains it.
2017-10-25 11:57:08,596 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2017-10-25 11:57:08,597 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmppQcC56 2>/tmp/tmpQzfVaA''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,655 - call returned (0, '')
2017-10-25 11:57:08,656 - Skipping the operation for not managed DFS directory /tmp since immutable_paths contains it.
2017-10-25 11:57:08,657 - HdfsResource['/tmp/entity-file-history/active'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-10-25 11:57:08,658 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/tmp/entity-file-history/active?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmphjfD9W 2>/tmp/tmpAXZcwQ''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,716 - call returned (0, '')
2017-10-25 11:57:08,718 - HdfsResource['/mapred'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'mapred', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-10-25 11:57:08,719 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/mapred?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUKClkw 2>/tmp/tmpMpWRom''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,776 - call returned (0, '')
2017-10-25 11:57:08,778 - HdfsResource['/mapred/system'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-10-25 11:57:08,779 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/mapred/system?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpt7cATo 2>/tmp/tmpTwHyoF''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,837 - call returned (0, '')
2017-10-25 11:57:08,839 - HdfsResource['/mr-history/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'mapred', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2017-10-25 11:57:08,840 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/mr-history/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpu7ef01 2>/tmp/tmpgb8iXA''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:08,901 - call returned (0, '')
2017-10-25 11:57:08,902 - Skipping the operation for not managed DFS directory /mr-history/done since immutable_paths contains it.
2017-10-25 11:57:08,903 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-10-25 11:57:08,903 - Directory['/hadoop/mapreduce/jhs'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'cd_access': 'a'}
2017-10-25 11:57:08,914 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-10-25 11:57:08,915 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:08,916 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:08,917 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-10-25 11:57:08,918 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:08,918 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:08,919 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:08,920 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-10-25 11:57:08,921 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-10-25 11:57:08,921 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-10-25 11:57:08,933 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-10-25 11:57:08,933 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-25 11:57:08,952 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-10-25 11:57:08,962 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-10-25 11:57:08,962 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,005 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-10-25 11:57:09,015 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-10-25 11:57:09,015 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,051 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 508 to yarn
2017-10-25 11:57:09,052 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-10-25 11:57:09,061 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-10-25 11:57:09,061 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,157 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-10-25 11:57:09,166 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-10-25 11:57:09,167 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,179 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 505 to yarn
2017-10-25 11:57:09,182 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
2017-10-25 11:57:09,184 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
2017-10-25 11:57:09,189 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
2017-10-25 11:57:09,190 - File['/usr/hdp/current/hadoop-yarn-client/bin/container-executor'] {'group': 'hadoop', 'mode': 02050}
2017-10-25 11:57:09,192 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg'] {'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644}
2017-10-25 11:57:09,193 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-10-25 11:57:09,195 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755}
2017-10-25 11:57:09,197 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
2017-10-25 11:57:09,198 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-10-25 11:57:09,207 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-10-25 11:57:09,208 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,244 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 506 to mapred
2017-10-25 11:57:09,245 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-10-25 11:57:09,256 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-10-25 11:57:09,256 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,268 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 506 to hdfs
2017-10-25 11:57:09,268 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-10-25 11:57:09,279 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-10-25 11:57:09,279 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,285 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-10-25 11:57:09,285 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-10-25 11:57:09,295 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-10-25 11:57:09,295 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,301 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-10-25 11:57:09,310 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-10-25 11:57:09,310 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-25 11:57:09,317 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2017-10-25 11:57:09,317 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2017-10-25 11:57:09,318 - Called copy_to_hdfs tarball: mapreduce
2017-10-25 11:57:09,318 - Default version is 2.5.3.0-37
2017-10-25 11:57:09,318 - Source file: /usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz , Dest file in HDFS: /hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz
2017-10-25 11:57:09,319 - HdfsResource['/hdp/apps/2.5.3.0-37/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0555}
2017-10-25 11:57:09,320 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp4yP0WP 2>/tmp/tmpYWnVBJ''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:09,381 - call returned (0, '')
2017-10-25 11:57:09,383 - HdfsResource['/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz', 'dfs_type': '', 'default_fs': 'hdfs://ip-172-31-17-42.eu-central-1.compute.internal:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0444}
2017-10-25 11:57:09,384 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpu74RCt 2>/tmp/tmpt4swg4''] {'logoutput': None, 'quiet': False}
2017-10-25 11:57:09,454 - call returned (0, '')
2017-10-25 11:57:09,455 - Creating new file /hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz in DFS
2017-10-25 11:57:09,456 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz '"'"'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmpRh8dU3 2>/tmp/tmpsOZXYo''] {'logoutput': None, 'quiet': False}
2017-10-25 11:58:26,763 - call returned (52, '')

Command failed after 1 tries
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 190, in <module>
    HistoryServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 101, in start
    host_sys_prepped=params.host_sys_prepped)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 257, in copy_to_hdfs
    replace_existing_files=replace_existing_files,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 255, in action_delayed
    self._create_resource()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 269, in _create_resource
    self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 322, in _create_file
    self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 179, in run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz 'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444' 1>/tmp/tmpRh8dU3 2>/tmp/tmpsOZXYo' returned 52. curl: (52) Empty reply from server
100
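
Note: the failing step is a WebHDFS upload, which normally happens in two stages (the NameNode answers the PUT with a redirect pointing at a DataNode, and the file data is then sent to that DataNode). curl error 52 ("Empty reply from server") typically means a connection was made but no HTTP response came back. A rough sketch for reproducing the same upload by hand, using the host and path from the log above (the DataNode URL in step 2 is whatever the redirect returns, shown here only as a placeholder):

# Step 1: ask the NameNode for an upload location; expect HTTP 307 with a Location header
curl -sS -i -X PUT 'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=true'

# Step 2: send the data to the URL from that Location header (placeholder, not a real URL)
curl -sS -i -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz '<datanode-url-from-step-1>'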

3 REPLIES

Super Guru

@Imre Ruskal,

From the logs, it looks like the NameNode is not running. Can you please log in to the NameNode box and run:

netstat -tupln | grep 50070
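
If nothing is listening there, or as an additional check, the WebHDFS endpoint itself can be probed. A rough sketch, with the host and port taken from the logs above:

# Should return a JSON directory listing if the NameNode and WebHDFS are up
curl -sS 'http://ip-172-31-17-42.eu-central-1.compute.internal:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'

# HDFS-level health summary, run as the hdfs user
sudo -u hdfs hdfs dfsadmin -report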

If it is not listening, try starting the NameNode and then retry starting the History Server.

Thanks,

Aditya

Explorer (accepted solution)

Hi @Aditya Sirna

The NameNode is up and running; I can issue hdfs dfs -get and -put commands and they execute successfully.

However, I do not find the mapreduce.tar.gz file at that location (the directory itself exists).
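
For reference, a quick way to compare both sides of the copy, i.e. the local source tarball and the HDFS target directory (a sketch; the paths are the ones from the log above):

# Local tarball that the History Server start tries to upload
ls -l /usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz

# HDFS target directory; since the upload fails with curl 52, the file is expected to be absent
sudo -u hdfs hdfs dfs -ls /hdp/apps/2.5.3.0-37/mapreduce/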

Super Guru

@Imre Ruskal

Can you please try running the command below from the History Server node and check the response:

telnet ip-172-31-17-42.eu-central-1.compute.internal 50070

Also, can you please share the /etc/hosts entries on the NameNode box?
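
For example, something along these lines from the History Server node (a sketch; the hostname is the one from the logs, and 50075 is the default DataNode HTTP port that the WebHDFS upload gets redirected to, assuming a DataNode runs on that same host):

# How does this node resolve the NameNode hostname?
getent hosts ip-172-31-17-42.eu-central-1.compute.internal
cat /etc/hosts

# The WebHDFS PUT is redirected to a DataNode, so its HTTP port must be reachable as well
telnet ip-172-31-17-42.eu-central-1.compute.internal 50075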