Member since: 09-15-2019
Posts: 24
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2268 | 10-10-2019 09:23 AM |
|  | 2806 | 09-18-2019 05:57 AM |
10-09-2019 07:31 PM
@Shelton MapReduce2 is not running because the History Server does not start. I tried to fix it but failed. Can you help me see what the error is? (See the error message above.) Thank you!
10-07-2019 10:53 AM
Hello, I am using HDP 3.0 with Ambari 2.7.1. My MapReduce2 service is not running; it reports the following error:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/historyserver.py", line 126, in <module>
    HistoryServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/historyserver.py", line 94, in start
    skip=params.sysprep_skip_copy_tarballs_hdfs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py", line 516, in copy_to_hdfs
    replace_existing_files=replace_existing_files,
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 672, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 669, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 368, in action_delayed
    self.action_delayed_for_nameservice(None, action_name, main_resource)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 398, in action_delayed_for_nameservice
    self._create_resource()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 414, in _create_resource
    self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 529, in _create_file
    self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 209, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 290, in _run_command
    raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
  "RemoteException": {
    "exception": "IOException",
    "javaClassName": "java.io.IOException",
    "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
  }
}
----------------
2019-10-07 17:18:40,164 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187 2019-10-07 17:18:40,210 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf 2019-10-07 17:18:40,607 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187 2019-10-07 17:18:40,619 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf 2019-10-07 17:18:40,623 - Group['livy'] {} 2019-10-07 17:18:40,626 - Group['spark'] {} 2019-10-07 17:18:40,626 - Group['ranger'] {} 2019-10-07 17:18:40,627 - Group['hdfs'] {} 2019-10-07 17:18:40,627 - Group['zeppelin'] {} 2019-10-07 17:18:40,628 - Group['hadoop'] {} 2019-10-07 17:18:40,628 - Group['users'] {} 2019-10-07 17:18:40,629 - Group['knox'] {} 2019-10-07 17:18:40,630 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,633 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,635 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,636 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,638 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,640 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,642 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-10-07 17:18:40,644 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,646 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2019-10-07 17:18:40,648 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-10-07 17:18:40,650 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2019-10-07 17:18:40,652 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2019-10-07 17:18:40,654 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,656 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2019-10-07 17:18:40,658 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-10-07 17:18:40,660 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,662 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2019-10-07 17:18:40,663 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,665 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,667 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,669 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-10-07 17:18:40,672 - User['knox'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None} 2019-10-07 17:18:40,673 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-10-07 17:18:40,677 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2019-10-07 17:18:40,686 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2019-10-07 17:18:40,687 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2019-10-07 17:18:40,689 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-10-07 17:18:40,692 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-10-07 17:18:40,694 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2019-10-07 17:18:40,708 - call returned (0, '1015') 2019-10-07 17:18:40,709 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2019-10-07 17:18:40,718 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if 2019-10-07 17:18:40,718 - Group['hdfs'] {} 2019-10-07 17:18:40,719 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2019-10-07 17:18:40,720 - FS Type: HDFS 2019-10-07 17:18:40,721 - Directory['/etc/hadoop'] {'mode': 0755} 2019-10-07 17:18:40,752 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-10-07 17:18:40,753 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2019-10-07 17:18:40,781 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2019-10-07 17:18:40,789 - Skipping Execute[('setenforce', '0')] due to not_if 2019-10-07 17:18:40,790 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2019-10-07 17:18:40,794 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2019-10-07 17:18:40,796 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2019-10-07 17:18:40,797 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2019-10-07 17:18:40,804 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2019-10-07 17:18:40,808 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2019-10-07 17:18:40,817 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2019-10-07 17:18:40,839 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-10-07 17:18:40,840 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2019-10-07 17:18:40,843 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2019-10-07 17:18:40,851 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2019-10-07 17:18:40,858 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2019-10-07 17:18:40,866 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari 2019-10-07 17:18:41,374 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf 2019-10-07 17:18:41,375 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187 2019-10-07 17:18:41,435 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf 2019-10-07 17:18:41,460 - Directory['/var/log/hadoop-yarn'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0775, 'owner': 'yarn'} 2019-10-07 17:18:41,463 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:41,464 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:41,465 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-10-07 17:18:41,466 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:41,467 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:41,468 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:41,469 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 
'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-10-07 17:18:41,470 - Directory['/usr/hdp/3.0.1.0-187/hadoop/conf/embedded-yarn-ats-hbase'] {'owner': 'yarn-ats', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-10-07 17:18:41,472 - HdfsResource['/app-logs'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777} 2019-10-07 17:18:41,476 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprS6_LP 2>/tmp/tmpxbComv''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:41,574 - call returned (0, '') 2019-10-07 17:18:41,574 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16398,"group":"hadoop","length":0,"modificationTime":1570367606604,"owner":"yarn","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:41,577 - Skipping the operation for not managed DFS directory /app-logs since immutable_paths contains it. 2019-10-07 17:18:41,578 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0777} 2019-10-07 17:18:41,583 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUly_vs 2>/tmp/tmpdAcCCc''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:41,686 - call returned (0, '') 2019-10-07 17:18:41,686 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1570367607146,"owner":"hdfs","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:41,687 - Skipping the operation for not managed DFS directory /tmp since immutable_paths contains it. 
2019-10-07 17:18:41,687 - HdfsResource['/tmp/entity-file-history/active'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2019-10-07 17:18:41,691 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/tmp/entity-file-history/active?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpTaWOjs 2>/tmp/tmpfIkOga''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:41,769 - call returned (0, '') 2019-10-07 17:18:41,770 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16400,"group":"hadoop","length":0,"modificationTime":1570367607146,"owner":"yarn","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:41,771 - HdfsResource['/mapred'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'mapred', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2019-10-07 17:18:41,772 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/mapred?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpMoaFsU 2>/tmp/tmpy2iao4''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:41,846 - call returned (0, '') 2019-10-07 17:18:41,846 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16401,"group":"hdfs","length":0,"modificationTime":1570367607511,"owner":"mapred","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:41,847 - HdfsResource['/mapred/system'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2019-10-07 17:18:41,849 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/mapred/system?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpH2miBc 2>/tmp/tmpYsk6zG''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:41,925 - call returned (0, '') 2019-10-07 17:18:41,925 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16402,"group":"hdfs","length":0,"modificationTime":1570367607511,"owner":"hdfs","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:41,927 - HdfsResource['/mr-history/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'mapred', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0777} 2019-10-07 17:18:41,928 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/mr-history/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpT8iqsx 2>/tmp/tmpW2LgG6''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:42,006 - call returned (0, '') 2019-10-07 17:18:42,006 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16404,"group":"hadoop","length":0,"modificationTime":1570367607706,"owner":"mapred","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:42,007 - Skipping the operation for not managed DFS directory /mr-history/done since immutable_paths contains it. 
2019-10-07 17:18:42,008 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2019-10-07 17:18:42,009 - Directory['/hadoop/mapreduce/jhs'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'cd_access': 'a'} 2019-10-07 17:18:42,020 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-10-07 17:18:42,021 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-mapreduce2.json 2019-10-07 17:18:42,021 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-mapreduce2.json'] {'content': Template('input.config-mapreduce2.json.j2'), 'mode': 0644} 2019-10-07 17:18:42,030 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-10-07 17:18:42,030 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json 2019-10-07 17:18:42,030 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json'] {'content': Template('input.config-yarn.json.j2'), 'mode': 0644} 2019-10-07 17:18:42,032 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,045 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/core-site.xml 2019-10-07 17:18:42,045 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,096 - Writing File['/usr/hdp/3.0.1.0-187/hadoop/conf/core-site.xml'] because contents don't match 2019-10-07 17:18:42,097 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,109 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/hdfs-site.xml 2019-10-07 17:18:42,109 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,171 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2019-10-07 17:18:42,182 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml 2019-10-07 17:18:42,183 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,240 - Changing owner for 
/usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml from 1005 to yarn 2019-10-07 17:18:42,240 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'hadoop.registry.dns.bind-port': u'true'}}, 'owner': 'yarn', 'configurations': ...} 2019-10-07 17:18:42,250 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/yarn-site.xml 2019-10-07 17:18:42,250 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,393 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2019-10-07 17:18:42,403 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml 2019-10-07 17:18:42,403 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,419 - Changing owner for /usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml from 1003 to yarn 2019-10-07 17:18:42,420 - XmlConfig['hbase-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf/embedded-yarn-ats-hbase', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn-ats', 'configurations': ...} 2019-10-07 17:18:42,430 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml 2019-10-07 17:18:42,430 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml'] {'owner': 'yarn-ats', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,470 - XmlConfig['resource-types.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': {u'yarn.resource-types.yarn.io_gpu.maximum-allocation': u'8', u'yarn.resource-types': u''}} 2019-10-07 17:18:42,479 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/resource-types.xml 2019-10-07 17:18:42,480 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/resource-types.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,484 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644} 2019-10-07 17:18:42,486 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644} 2019-10-07 17:18:42,496 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755} 2019-10-07 17:18:42,497 - File['/usr/hdp/3.0.1.0-187/hadoop-yarn/bin/container-executor'] {'group': 'hadoop', 'mode': 02050} 2019-10-07 17:18:42,501 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/container-executor.cfg'] {'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644} 2019-10-07 17:18:42,502 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-10-07 17:18:42,505 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755} 2019-10-07 17:18:42,506 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-10-07 
17:18:42,509 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'} 2019-10-07 17:18:42,510 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'mapred', 'configurations': ...} 2019-10-07 17:18:42,519 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml 2019-10-07 17:18:42,519 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,569 - Changing owner for /usr/hdp/3.0.1.0-187/hadoop/conf/mapred-site.xml from 1004 to mapred 2019-10-07 17:18:42,569 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,578 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml 2019-10-07 17:18:42,579 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,596 - Changing owner for /usr/hdp/3.0.1.0-187/hadoop/conf/capacity-scheduler.xml from 1004 to hdfs 2019-10-07 17:18:42,596 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,607 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/ssl-client.xml 2019-10-07 17:18:42,607 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,614 - Directory['/usr/hdp/3.0.1.0-187/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-10-07 17:18:42,615 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,625 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/secure/ssl-client.xml 2019-10-07 17:18:42,625 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,631 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-10-07 17:18:42,641 - Generating config: /usr/hdp/3.0.1.0-187/hadoop/conf/ssl-server.xml 2019-10-07 17:18:42,641 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-10-07 17:18:42,651 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2019-10-07 17:18:42,652 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2019-10-07 17:18:42,656 - Called copy_to_hdfs tarball: mapreduce 2019-10-07 17:18:42,657 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187 2019-10-07 17:18:42,657 - Tarball version was 
calcuated as 3.0.1.0-187. Use Command Version: True 2019-10-07 17:18:42,657 - Source file: /usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz , Dest file in HDFS: /hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz 2019-10-07 17:18:42,657 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187 2019-10-07 17:18:42,657 - Tarball version was calcuated as 3.0.1.0-187. Use Command Version: True 2019-10-07 17:18:42,658 - HdfsResource['/hdp/apps/3.0.1.0-187/mapreduce'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0555} 2019-10-07 17:18:42,659 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/hdp/apps/3.0.1.0-187/mapreduce?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpMi7kGa 2>/tmp/tmpp7Jufp''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:42,719 - call returned (0, '') 2019-10-07 17:18:42,719 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16408,"group":"hdfs","length":0,"modificationTime":1570367608787,"owner":"hdfs","pathSuffix":"","permission":"555","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-10-07 17:18:42,720 - HdfsResource['/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'replace_existing_files': False, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0444} 2019-10-07 17:18:42,722 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpeMvSJj 2>/tmp/tmpfLBvzk''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:42,782 - call returned (0, '') 2019-10-07 17:18:42,783 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz"}}404', u'') 2019-10-07 17:18:42,783 - Creating new file /hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz in DFS 2019-10-07 17:18:42,784 - call['ambari-sudo.sh su hdfs -l -s /bin/bash 
-c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'"'"' 1>/tmp/tmpACYVNU 2>/tmp/tmpHySFOw''] {'logoutput': None, 'quiet': False} 2019-10-07 17:18:43,006 - call returned (0, '') 2019-10-07 17:18:43,007 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"}}403', u'')
Command failed after 1 tries
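A 403 from WebHDFS with "Failed to find datanode" usually means the NameNode is up but no DataNode has registered with it. Here is a minimal health check, assuming the Sandbox defaults visible in the log above (WebHDFS on port 50070, user hdfs); the DataNode log path is an assumption, formed by analogy with the other Sandbox log paths in this thread:

```bash
# Summarize HDFS health; "Live datanodes (0)" would confirm no DataNode is registered
sudo -u hdfs hdfs dfsadmin -report

# Ask WebHDFS directly; a healthy cluster returns a JSON FileStatuses listing
curl -sS 'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'

# Check why the DataNode is down (log path assumed)
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sandbox-hdp.hortonworks.com.log
```

If the NameNode was re-formatted earlier (as happened in this thread), a common cause is a clusterID mismatch between the freshly formatted NameNode and the DataNode's old storage directory, which the DataNode log will report explicitly.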
Thanks in advance to anyone who can show me how to fix this error. Thank you very much!
Labels: Apache Ambari
10-07-2019 06:30 AM
@Shelton Thanks for your reminder. I will learn from this experience so as not to disrupt the community. Thanks for your help!
10-06-2019 05:56 AM
Hi @Shelton, before reading your instructions I tried running this command: sudo -u hdfs hdfs namenode -format. After executing it, the NameNode is now running. Thank you for your enthusiastic help!
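A note for anyone who lands here: hdfs namenode -format erases the NameNode metadata, so on anything but a throwaway Sandbox it should be a last resort. A quick sketch for verifying the NameNode actually came up afterwards, using the ports discussed in this thread:

```bash
# The NameNode JVM should now appear (not only the SecondaryNameNode)
jps | grep -i namenode

# Both the RPC port (8020) and the web UI port (50070) should be listening
netstat -tnlp | grep -E ':8020|:50070'
```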
10-06-2019 12:36 AM
@Shelton I have followed the steps as instructed, but still cannot fix it. I have attached two photos for review. Thank you very much!
10-04-2019 11:47 PM
@Herman I think the problem you described is right. Can you show me how to fix it?
09-30-2019 11:06 AM
Hello @jsensharma. I checked: the NameNode is not listening on ports 8020 and 50070. Here are the results of executing the ps -ef | grep -i NameNode command:

[root@sandbox-hdp ~]# ps -ef | grep -i NameNode
hdfs 1879 1 0 07:47 ? 00:00:06 /usr/lib/jvm/java/bin/java -Dproc_secondarynamenode -Dhdp.version=3.0.1.0-187 -Djava.net.preferIPv4Stack=true -Dhdp.version=3.0.1.0-187 -Dhdfs.audit.logger=INFO,NullAppender -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=128m -XX:MaxNewSize=128m -Xloggc:/var/log/hadoop/hdfs/gc.log-201909300747 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node -Dyarn.log.dir=/var/log/hadoop/hdfs -Dyarn.log.file=hadoop-hdfs-secondarynamenode-sandbox-hdp.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/3.0.1.0-187/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.0.1.0-187/hadoop/lib/native -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-secondarynamenode-sandbox-hdp.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/3.0.1.0-187/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
root 12433 12140 0 07:59 pts/0 00:00:00 grep --color=auto -i NameNode

File Log ==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-sandbox-hdp.hortonworks.com.log <==
2019-09-30 09:04:11,529 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:12,530 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:13,531 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:14,532 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:15,533 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:16,535 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:17,540 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020.
Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:18,543 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:19,544 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:20,545 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:21,546 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:22,547 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:23,549 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:24,552 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:25,554 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:26,555 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:27,556 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:28,557 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 19 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:29,558 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. 
Already tried 20 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:30,559 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 21 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:31,560 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 22 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:32,561 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 23 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:33,563 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 24 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:34,564 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 25 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:35,565 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 26 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:36,567 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 27 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:37,569 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 28 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:38,570 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 29 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:39,573 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 30 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:40,575 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 31 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:41,581 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. 
Already tried 32 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:42,584 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 33 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:43,594 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 34 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:44,595 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 35 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:45,599 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 36 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:46,605 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 37 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:47,607 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 38 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:48,608 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 39 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:49,609 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. Already tried 40 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2019-09-30 09:04:50,615 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: sandbox-hdp.hortonworks.com/172.18.0.2:8020. 
Already tried 41 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) ==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-sandbox-hdp.hortonworks.com.out <== java.net.ConnectException: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501) at org.apache.hadoop.ipc.Client.call(Client.java:1443) at org.apache.hadoop.ipc.Client.call(Client.java:1353) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy10.getTransactionId(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.getTransactionID(NamenodeProtocolTranslatorPB.java:130) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy11.getTransactionID(Unknown Source) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.countUncheckpointedTxns(SecondaryNameNode.java:660) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(SecondaryNameNode.java:668) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:358) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:325) at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:482) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:321) at java.lang.Thread.run(Thread.java:748) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:687) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:790) at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1558) at org.apache.hadoop.ipc.Client.call(Client.java:1389) ... 
21 more

I have followed the instructions in this article, but still cannot fix it: https://www.ibm.com/support/pages/ibm-biginsights-how-configure-hadoop-client-port-8020-bind-all-network-interfaces

Thanks for reviewing and helping; sincerely, thank you!
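Since only the SecondaryNameNode appears in the ps output above, here is a minimal way to confirm the NameNode itself is down and find out why. The NameNode log path is an assumption, formed by analogy with the SecondaryNameNode log path above:

```bash
# No output here means nothing is listening on the NameNode ports
netstat -tnlp | grep -E ':8020|:50070'

# The startup failure is usually spelled out at the end of the NameNode log (path assumed)
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox-hdp.hortonworks.com.log
```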
09-30-2019 05:13 AM
Please guide me through how to do this, step by step. Thank you very much!
09-29-2019 07:26 PM
Hello,
- I've set up the HDP 3.0.1 Sandbox.
- I determined the network adapter of my VirtualBox Sandbox (Bridged Adapter, IP: 192.168.1.37).
- I edited the hosts file on my computer (Windows) and mapped the domain sandbox-hdp.hortonworks.com to the IP address 192.168.1.37.

But when I start all services via the Ambari UI, I receive the following error message:
safemode: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-09-29 18:25:10,968 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://sandbox-hdp.hortonworks.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused.
I do not know how to fix this error. If anyone knows, please help. Thank you!
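One way to reproduce the failing check by hand, outside Ambari; this is a sketch that reuses the exact command Ambari retries above:

```bash
# Confirm the hostname resolves from inside the VM
ping -c 1 sandbox-hdp.hortonworks.com

# Run the same safemode probe Ambari uses; "Connection refused" here means
# the NameNode process is not listening on port 8020 at all
sudo -u hdfs /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin \
  -fs hdfs://sandbox-hdp.hortonworks.com:8020 -safemode get
```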
Labels: Hortonworks Data Platform (HDP)