Member since: 12-21-2020
Posts: 91
Kudos Received: 8
Solutions: 13

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1987 | 08-12-2021 05:16 AM |
| | 2210 | 06-29-2021 06:21 AM |
| | 2664 | 06-16-2021 07:15 AM |
| | 1879 | 06-14-2021 12:08 AM |
| | 6231 | 05-14-2021 06:03 AM |
04-09-2021
01:26 AM
Hello Everyone, Has anyone tested the Distcp editor available in HUE with CDP 7.1.5? Currently I see the screen below. In it, I'm not able to manually enter the source cluster path: as soon as I click on the source path text box, it opens a dialog to select a path from the current cluster's HDFS. Has anybody tested pulling data from another cluster? What config changes are required to get this working? Thanks, Megh
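For reference, the equivalent pull from the command line would look roughly like the sketch below; the nameservice names and paths are placeholders rather than values from this post, and they assume the remote HA nameservice is already defined in the local client hdfs-site.xml.

```bash
# Hypothetical cross-cluster pull with DistCp: "remote-ns", "local-ns"
# and the paths are placeholders for illustration only.
hadoop distcp \
  hdfs://remote-ns/user/megh/source_dir \
  hdfs://local-ns/user/megh/target_dir
```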
03-22-2021
05:14 AM
1 Kudo
I added the same configuration in the "HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml" section, restarted the cluster, and the issue is now resolved. Thanks, Megh
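For context, the client-side properties involved typically look like the sketch below. The nameservice name and NameNode hostnames are placeholders, not the values actually used in this cluster; only the property keys and the failover proxy provider class are standard HDFS ones.

```xml
<!-- Sketch of a remote HA nameservice definition for the HDFS Client
     Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml.
     "remote-ns" and the hostnames below are placeholders. -->
<property>
  <name>dfs.ha.namenodes.remote-ns</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remote-ns.nn1</name>
  <value>remote-nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remote-ns.nn2</name>
  <value>remote-nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.remote-ns</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

The remote nameservice also needs to be listed in dfs.nameservices (appended to the local one) so the client can resolve hdfs://remote-ns/ at all.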
03-22-2021
01:38 AM
Hello Everyone, We have recently deployed CDP. I need to fetch data from one of my other clusters (HDFS HA-enabled) into this cluster using distcp. To configure the remote nameservice in my CDP cluster, I followed the steps given in this link and added the configuration properties to the "HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml" section of the HDFS configuration in Cloudera Manager. After this, I restarted the services and tried to run the following command from one of the hosts in the cluster:

hadoop fs -ls hdfs://<remote-nameservice-name>/

But it throws this error:

21/03/22 13:59:34 WARN fs.FileSystem: Failed to initialize fileystem hdfs://<remote-nameservice-name>/: java.lang.IllegalArgumentException: java.net.UnknownHostException: <remote-nameservice-name>
-ls: java.net.UnknownHostException: <remote-nameservice-name>
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-v] [-x] <path> ...]
[-expunge [-immediate]]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-head <file>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] [-s <sleep interval>] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines
The general command line syntax is:
command [genericOptions] [commandOptions]
Usage: hadoop fs [generic options] -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]

I checked the hdfs-site.xml files on the host and they don't have the updated config. What might be the reason behind this? Why does Cloudera Manager not push the config change to the hosts in the back end? Thanks, Megh
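For anyone hitting the same UnknownHostException, a quick way to confirm whether the deployed client configuration actually knows about the remote nameservice is sketched below; the commands are standard, and <remote-nameservice-name> stays a placeholder.

```bash
# Does the client configuration list the remote nameservice at all?
hdfs getconf -confKey dfs.nameservices

# Inspect the client hdfs-site.xml that gateway hosts actually read.
grep -A1 'dfs.nameservices' /etc/hadoop/conf/hdfs-site.xml
```

In this thread the eventual fix (see the 03-22-2021 05:14 AM reply above) was to put the same snippet into the "HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml": the service-level snippet only affects the HDFS server roles, while the client snippet is what ends up in the client configuration that `hadoop fs` reads.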
03-10-2021
05:21 AM
Hello @Atahar, the vulnerability ID is "http-options-method-enabled". I don't want to disable HTTP and enable HTTPS; I want to disable the HTTP OPTIONS method. Thanks, Megh
03-09-2021
02:16 AM
Hi @Atahar, thanks for your reply. I'm actually looking for a property to disable the HTTP OPTIONS method, as this is being flagged as a vulnerability by my internal security team. Thanks, Megh
02-25-2021
02:06 AM
Hello Everyone, Is there a reference document with an exhaustive list of the properties that can be set in ambari.properties? I would like to check whether Ambari has an option for disabling the HTTP OPTIONS method. Thanks, Megh
02-03-2021
03:02 AM
Hi @GangWar, on the node where this issue was occurring, the default JDK folder name was java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64, so I assumed it was 1.8.0.161. After your suggestion I went onto the node and issued "java -version", and to my surprise it was indeed openjdk version "1.8.0_252". Following your suggestion fixed the issue for me. Thanks a lot! Megh
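As a general check, the JDK a process actually uses can differ from what the install directory name suggests; a quick way to confirm is sketched below (the commands are standard, the paths they print depend on the node).

```bash
# Report the JDK version actually in use, independent of the directory name.
java -version

# Resolve which java binary the PATH points at (follows the alternatives symlinks).
readlink -f "$(command -v java)"
```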
02-02-2021
10:50 PM
Hello Everyone, I've recently enabled Kerberos in my cluster, and since then one of my datanodes is not able to connect to the namenode. I see these entries in the namenode logs:

2021-02-03 12:06:15,699 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $4E8100-MH1MCLUV65LO@<Realm-Name> (auth:KERBEROS)
2021-02-03 12:06:15,700 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from <datanode-ip>:42328 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/<datanode-hostname>@<Realm-Name> (auth:PROXY) via $4E8100-MH1MCLUV65LO@<Realm-Name> (auth:KERBEROS)
2021-02-03 12:06:15,700 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client <datanode-ip> threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $4E8100-MH1MCLUV65LO@<Realm-Name> is not allowed to impersonate dn/<datanode-hostname>@<Realm-Name>]

And from the datanode logs:

2021-02-03 12:14:33,806 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(225)) - Problem connecting to server: <namenode-hostname>/<namenode-ip>:8020

I've ensured that the hostnames are lowercase and consistent on all the nodes. Telnet from the datanode to the namenode hostname on port 8020 also works. Regenerating keytabs and restarting everything didn't help either. Any other areas to look into? Thanks, Megh
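A hedged debugging sketch for this kind of "is not allowed to impersonate" error: confirm which principal the DataNode keytab actually contains and that it can obtain a ticket as that principal. The keytab path below is the usual HDP default and the realm is a placeholder; neither is confirmed by the post.

```bash
# List the principals stored in the DataNode service keytab
# (path is the common HDP default, assumed here).
klist -kt /etc/security/keytabs/dn.service.keytab

# Try to authenticate as the DataNode principal; hostname and realm are placeholders.
kinit -kt /etc/security/keytabs/dn.service.keytab dn/<datanode-hostname>@<Realm-Name>
klist
```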
02-02-2021
12:07 AM
Fixed this by changing my hostnames to lowercase. Earlier the hostnames were mixed case, and while researching online I found that Kerberos expects hostnames to be lowercase and the realm name to be uppercase. Changing the hostnames to all lowercase resolved the issue. Thanks, Megh
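A quick way to spot this kind of mismatch on each node is sketched below; the commands are standard, and the all-lowercase FQDN is the expectation being checked.

```bash
# Print the fully qualified hostname and warn if it contains uppercase characters.
hostname -f
hostname -f | grep -q '[A-Z]' && echo "WARNING: mixed-case hostname"
```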
02-01-2021
04:03 AM
Figured out the keytab error: I had changed the encryption types, and that was causing it. I reverted those changes in the Kerberos configuration and it came clean. But after the entire process, the services are now not coming up. Firstly, it gets stuck on starting the namenode, with safe mode not turning off automatically. If I take it out of safe mode manually, it shows the error below:

stderr:
2021-02-01 17:18:32,891 - The NameNode is still in Safemode. Please be careful with commands that need Safemode OFF.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 367, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 850, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 100, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 226, in namenode
create_hdfs_directories()
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 293, in create_hdfs_directories
mode=0777,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 555, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 552, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 279, in action_delayed
self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 238, in _assert_valid
self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 381, in _get_file_status
list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://<namenode-hostname>:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'' returned status_code=403.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 403 org.apache.hadoop.security.authentication.client.AuthenticationException</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /webhdfs/v1/tmp. Reason:
<pre> org.apache.hadoop.security.authentication.client.AuthenticationException</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
</body>
</html>
stdout:
2021-02-01 16:55:36,478 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-02-01 16:55:36,639 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2021-02-01 16:55:36,645 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2021-02-01 16:55:36,646 - Group['kms'] {}
2021-02-01 16:55:36,648 - Group['livy'] {}
2021-02-01 16:55:36,648 - Group['spark'] {}
2021-02-01 16:55:36,648 - Group['ranger'] {}
2021-02-01 16:55:36,648 - Group['hadoop'] {}
2021-02-01 16:55:36,648 - Group['users'] {}
2021-02-01 16:55:36,648 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,649 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,650 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2021-02-01 16:55:36,650 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,651 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2021-02-01 16:55:36,651 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2021-02-01 16:55:36,652 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2021-02-01 16:55:36,652 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,653 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,653 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,654 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2021-02-01 16:55:36,654 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,655 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,655 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,656 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,656 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,657 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-02-01 16:55:36,657 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2021-02-01 16:55:36,659 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2021-02-01 16:55:36,667 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2021-02-01 16:55:36,667 - Group['hdfs'] {}
2021-02-01 16:55:36,667 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2021-02-01 16:55:36,668 - FS Type:
2021-02-01 16:55:36,668 - Directory['/etc/hadoop'] {'mode': 0755}
2021-02-01 16:55:36,679 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2021-02-01 16:55:36,680 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2021-02-01 16:55:36,696 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2021-02-01 16:55:36,706 - Skipping Execute[('setenforce', '0')] due to not_if
2021-02-01 16:55:36,707 - Directory['/hadoop/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2021-02-01 16:55:36,709 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2021-02-01 16:55:36,709 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2021-02-01 16:55:36,713 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2021-02-01 16:55:36,714 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2021-02-01 16:55:36,718 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2021-02-01 16:55:36,726 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2021-02-01 16:55:36,726 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2021-02-01 16:55:36,727 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2021-02-01 16:55:36,730 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2021-02-01 16:55:36,736 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2021-02-01 16:55:36,991 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-02-01 16:55:36,992 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2021-02-01 16:55:37,011 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-02-01 16:55:37,023 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2021-02-01 16:55:37,063 - checked_call returned (0, '2.6.1.0-129', '')
2021-02-01 16:55:37,071 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf stop namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'only_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2021-02-01 16:55:42,273 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
2021-02-01 16:55:42,274 - Pid file /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
2021-02-01 16:55:42,277 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2021-02-01 16:55:42,281 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2021-02-01 16:55:42,281 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-02-01 16:55:42,288 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2021-02-01 16:55:42,288 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,295 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-02-01 16:55:42,301 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2021-02-01 16:55:42,301 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,306 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2021-02-01 16:55:42,306 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2021-02-01 16:55:42,312 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2021-02-01 16:55:42,313 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,317 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-02-01 16:55:42,323 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2021-02-01 16:55:42,323 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,328 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2021-02-01 16:55:42,334 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2021-02-01 16:55:42,334 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,372 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2021-02-01 16:55:42,378 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2021-02-01 16:55:42,378 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2021-02-01 16:55:42,399 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'root'}
2021-02-01 16:55:42,403 - Directory['/hadoop/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2021-02-01 16:55:42,403 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2021-02-01 16:55:42,406 - Called service start with upgrade_type: None
2021-02-01 16:55:42,406 - Ranger Hdfs plugin is not enabled
2021-02-01 16:55:42,407 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2021-02-01 16:55:42,407 - /hadoop/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2021-02-01 16:55:42,407 - Directory['/hadoop/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2021-02-01 16:55:42,407 - Options for start command are:
2021-02-01 16:55:42,408 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2021-02-01 16:55:42,408 - Changing owner for /var/run/hadoop from 0 to hdfs
2021-02-01 16:55:42,408 - Changing group for /var/run/hadoop from 0 to hadoop
2021-02-01 16:55:42,408 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2021-02-01 16:55:42,408 - Directory['/hadoop/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2021-02-01 16:55:42,409 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2021-02-01 16:55:42,417 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2021-02-01 16:55:46,632 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-<clustername>@<REALM>'] {'user': 'hdfs'}
2021-02-01 16:55:46,843 - Waiting for this NameNode to leave Safemode due to the following conditions: HA: False, isActive: True, upgradeType: None
2021-02-01 16:55:46,844 - Waiting up to 19 minutes for the NameNode to leave Safemode...
2021-02-01 16:55:46,844 - Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://<namenode-hostname>:8020 -safemode get | grep 'Safe mode is OFF''] {'logoutput': True, 'tries': 115, 'user': 'hdfs', 'try_sleep': 10}
safemode: Call From <namenode-hostname>/<namenode-ip> to <namenode-hostname>:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2021-02-01 16:55:48,554 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://<namenode-hostname>:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From <namenode-hostname>/<namenode-ip> to <namenode-hostname>:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2021-02-01 16:56:00,609 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://<namenode-hostname>:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
[... the same "Retrying after 10 seconds" message repeats roughly every 12 seconds from 2021-02-01 16:56:12 through 17:18:08 ...]
2021-02-01 17:18:20,953 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://<namenode-hostname>:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2021-02-01 17:18:32,891 - The NameNode is still in Safemode. Please be careful with commands that need Safemode OFF.
2021-02-01 17:18:32,891 - HdfsResource['/tmp'] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 'hdfs://<namenode-hostname>:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'hdfs-<clustername>@<REALM>', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2021-02-01 17:18:32,893 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-<clustername>@<REALM>'] {'user': 'hdfs'}
2021-02-01 17:18:33,103 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://<namenode-hostname>:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp3JAW_l 2>/tmp/tmpksEFBR''] {'logoutput': None, 'quiet': False}
2021-02-01 17:18:33,291 - call returned (0, '')
Command failed after 1 tries

Can anybody make sense of this? Thanks, Megh
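Two hedged checks that usually narrow this kind of failure down: whether the NameNode RPC port is up and out of safe mode, and whether WebHDFS accepts a SPNEGO-authenticated request instead of returning 403. The keytab path, principal, hostname and port below simply mirror the log output above.

```bash
# Obtain a ticket as the hdfs headless principal (values as shown in the log).
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-<clustername>@<REALM>

# Is the NameNode RPC endpoint reachable, and is it still in safe mode?
hdfs dfsadmin -fs hdfs://<namenode-hostname>:8020 -safemode get

# Does WebHDFS accept a Kerberos (SPNEGO) negotiated request, or still return 403?
curl -sS -L --negotiate -u : \
  "http://<namenode-hostname>:50070/webhdfs/v1/tmp?op=GETFILESTATUS"
```

The repeated "Connection refused" on port 8020 in the log suggests the NameNode process itself is not up or not yet serving RPC, so the NameNode's own log is the next place to look rather than the Ambari retry loop.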