Member since: 07-13-2018
Posts: 8
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4361 | 01-06-2019 11:34 PM
01-06-2019 11:34 PM
Turns out this was all due to the public IP -> public FQDN mapping in /etc/hostname. When I changed the public IP to the private IP, everything was OK. Check this post: https://community.hortonworks.com/questions/83429/failed-to-startup-hdfs-namenode.html So the same thing happens on AWS EC2 with RHEL 7.
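As a quick way to reproduce the diagnosis, here is a minimal Python sketch (an illustration, not from the linked thread) that checks whether the FQDN this host resolves to is an address that can actually be bound locally; on EC2 the public IP is NAT-mapped and is not configured on any local interface:

```python
# Check what this host's FQDN resolves to and whether that address is
# bindable. On EC2, a public-IP mapping makes the bind fail with
# "Cannot assign requested address".
import socket

fqdn = socket.getfqdn()              # derived from /etc/hostname + resolver
addr = socket.gethostbyname(fqdn)    # the IP the NameNode HTTP server will try to bind
print("%s resolves to %s" % (fqdn, addr))

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind((addr, 0))                # port 0 = any free port; isolates the address check
    print("bind OK - the address is local")
except socket.error as err:
    print("bind failed: %s - map the FQDN to the private IP instead" % err)
finally:
    s.close()
```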
01-06-2019 11:32 PM
I would like to add that the same thing happens on RHEL 7 on AWS (EC2). It cost me both money and time! Thanks for this; it took me quite some time to find it. Sorry for bumping an old post!
01-06-2019 10:35 AM
Hi @Jay Kumar SenSharma, yesterday I tried setting the NameNode HTTP port to 50071 ... and now the error is that port 50071 is in use :). So I really doubt this error is being reported correctly. In addition, I noticed that even root cannot execute the command with the syntax you posted, so it might be a wrong check after all. Thx
01-03-2019 01:54 PM
Wow, that's a great idea, @Jay Kumar SenSharma! It's interesting, however, that your command didn't work, but I was able to bind the port without host info... Honestly, I didn't even know you could enter the host. (See screenshot screenshot-from-2019-01-03-14-47-02.png; the screenshot also shows one of the clients opening a connection via this port.) I'm also attaching nslookup/ifconfig output (screenshot-from-2019-01-03-14-54-07.png). What does this mean now? Thx!
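For readers following along, here is a rough Python sketch (my reconstruction, not the exact command from the screenshots) of the two bind variants being compared, with and without an explicit host; 50070 is the default NameNode HTTP port and the FQDN is the one from the log in the question below:

```python
# Compare a wildcard bind with a bind to the configured FQDN,
# which is what the NameNode HTTP server actually attempts.
import socket

def try_bind(host, port=50070):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return "OK"
    except socket.error as err:
        return "failed: %s" % err
    finally:
        s.close()

print(try_bind(""))  # wildcard bind: succeeds whenever the port itself is free
print(try_bind("ec2-34-211-154-113.us-west-2.compute.amazonaws.com"))
# fails with "Cannot assign requested address" when the FQDN resolves
# to the public IP, which is not on any local interface
```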
01-02-2019 10:57 PM
Hi, I'm building a sandbox on AWS. I have 3 t2.medium instances (2 vCPUs, 4 GB RAM) with 32 GB disks. Everything was set up to pass the installation. The instances are running RHEL 7.6 (AWS image). I didn't touch much - just enabled passwordless login for root and added MySQL to the "master" node for Hive. I get no warnings or issues during installation. After installation, during the startup phase, I get the following log. The "only" error I see is related to port 50070 being in use, but that is not the case - at least not before or after installation (on all three hosts, actually). Obviously, almost nothing else starts afterwards. Here's the full log: stderr:
<script id="metamorph-23723-start" type="text/x-placeholder"></script>Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 361, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 99, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ec2-34-211-154-113.us-west-2.compute.amazonaws.com.out
stdout:
<script id="metamorph-23725-start" type="text/x-placeholder"></script>2019-01-02 22:21:04,481 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2019-01-02 22:21:04,497 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2019-01-02 22:21:04,653 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2019-01-02 22:21:04,658 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2019-01-02 22:21:04,659 - Group['livy'] {}
2019-01-02 22:21:04,661 - Group['spark'] {}
2019-01-02 22:21:04,661 - Group['hdfs'] {}
2019-01-02 22:21:04,661 - Group['hadoop'] {}
2019-01-02 22:21:04,661 - Group['users'] {}
2019-01-02 22:21:04,662 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,663 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,663 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,664 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,665 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,665 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-01-02 22:21:04,666 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-01-02 22:21:04,667 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-01-02 22:21:04,668 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,668 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,669 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-01-02 22:21:04,670 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-02 22:21:04,671 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-02 22:21:04,676 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-02 22:21:04,677 - Group['hdfs'] {}
2019-01-02 22:21:04,677 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-01-02 22:21:04,678 - FS Type:
2019-01-02 22:21:04,678 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-02 22:21:04,692 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-01-02 22:21:04,693 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-02 22:21:04,708 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-01-02 22:21:04,715 - Skipping Execute[('setenforce', '0')] due to not_if
2019-01-02 22:21:04,715 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-01-02 22:21:04,717 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-01-02 22:21:04,718 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-01-02 22:21:04,721 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-01-02 22:21:04,723 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-01-02 22:21:04,729 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-01-02 22:21:04,738 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-01-02 22:21:04,739 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-01-02 22:21:04,740 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-01-02 22:21:04,744 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-01-02 22:21:04,748 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-01-02 22:21:04,985 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2019-01-02 22:21:04,986 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2019-01-02 22:21:05,006 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2019-01-02 22:21:05,021 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-01-02 22:21:05,026 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-01-02 22:21:05,026 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-01-02 22:21:05,035 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-policy.xml
2019-01-02 22:21:05,035 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,043 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-01-02 22:21:05,050 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/ssl-client.xml
2019-01-02 22:21:05,051 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,056 - Directory['/usr/hdp/2.6.4.0-91/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-01-02 22:21:05,057 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-01-02 22:21:05,064 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/secure/ssl-client.xml
2019-01-02 22:21:05,065 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,070 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-01-02 22:21:05,077 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/ssl-server.xml
2019-01-02 22:21:05,077 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,084 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-01-02 22:21:05,091 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/hdfs-site.xml
2019-01-02 22:21:05,091 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,132 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-01-02 22:21:05,139 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/core-site.xml
2019-01-02 22:21:05,139 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-01-02 22:21:05,160 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-01-02 22:21:05,160 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2019-01-02 22:21:05,166 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-01-02 22:21:05,166 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-01-02 22:21:05,169 - Called service start with upgrade_type: None
2019-01-02 22:21:05,169 - Ranger Hdfs plugin is not enabled
2019-01-02 22:21:05,171 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-01-02 22:21:05,172 - Writing File['/etc/hadoop/conf/dfs.exclude'] because it doesn't exist
2019-01-02 22:21:05,172 - Changing owner for /etc/hadoop/conf/dfs.exclude from 0 to hdfs
2019-01-02 22:21:05,172 - Changing group for /etc/hadoop/conf/dfs.exclude from 0 to hadoop
2019-01-02 22:21:05,172 - call[('ls', u'/hadoop/hdfs/namenode')] {}
2019-01-02 22:21:05,177 - call returned (0, '')
2019-01-02 22:21:05,177 - Execute['ls /hadoop/hdfs/namenode | wc -l | grep -q ^0$'] {}
2019-01-02 22:21:05,183 - Execute['hdfs --config /usr/hdp/2.6.4.0-91/hadoop/conf namenode -format -nonInteractive'] {'logoutput': True, 'path': ['/usr/hdp/2.6.4.0-91/hadoop/bin'], 'user': 'hdfs'}
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs/gc.log-201901022221 due to No such file or directory
19/01/02 22:21:06 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = ec2-34-211-154-113.us-west-2.compute.amazonaws.com/34.211.154.113
STARTUP_MSG: args = [-format, -nonInteractive]
STARTUP_MSG: version = 2.7.3.2.6.4.0-91
STARTUP_MSG: classpath = /usr/hdp/2.6.4.0-91/hadoop/conf:/usr/hdp/2.6.4.0-91/hadoop/lib/ojdbc6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/ranger-hdfs-plugin-shim-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/ranger-plugin-classloader-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jcip-annotations-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/ranger-yarn-plugin-shim-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/activation-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/asm-3.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/xz-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/joda-time-2.9.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/json-smart-1.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/junit-4.11.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/nimbus-jose-jwt-3.9.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/hamcrest-co
re-1.3.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop/lib/zookeeper-3.4.6.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//azure-data-lake-store-sdk-2.1.4.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-annotations-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-auth-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-auth.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-aws-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-aws.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-azure-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-azure-datalake-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-azure.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-common-2.7.3.2.6.4.0-91-tests.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-common-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-common.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-nfs-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/./:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.6.4.0-91
/hadoop-hdfs/.//hadoop-hdfs-2.7.3.2.6.4.0-91-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//hadoop-hdfs-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/json-smart-1.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/nimbus-jose-jwt-3.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/htrace-co
re-3.1.0-incubating.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/httpclient-4.5.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/httpcore-4.4.4.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/zookeeper-3.4.6.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/zookeeper-3.4.6.2.6.4.0-91-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jcip-annotations-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-api-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-client-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-common-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-registry-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-common-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.6.4.0-91/hadoop-mapre
duce/lib/aopalliance-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-rumen-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-gridmix-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-sls-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jcip-annotations-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//httpcore-4.4.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-htt
pclient-3.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//httpclient-4.5.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-lang3-3.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-ant-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-archives-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-auth-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-datajoin-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-streaming-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-distcp-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jsch-0.1.54.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-openstack-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-extras-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//json-smart-1.1.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.6.4.0-91/hadoop-ma
preduce/.//hadoop-mapreduce-client-common-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.3.2.6.4.0-91-tests.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//nimbus-jose-jwt-3.9.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//okhttp-2.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//okio-1.4.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.6.4.0-91/hadoop-mapreduce/.//zookeeper-3.4.6.2.6.4.0-91.jar::mysql-connector-java.jar:/usr/hdp/2.6.4.0-91/tez/tez-api-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-common-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-dag-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-examples-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-history-parser-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-job-analyzer-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-mapreduce-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-runtime-internals-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-runtime-library-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-tests-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-yarn-timeline-cache-plugin-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-yarn-timeline-history-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-yarn-timeline-history-with-acls-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/tez-yarn-timeline-history-with-fs-0.7.0.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/azure-data-lake-store-sdk-2.1.4.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-collections4-4.1.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-io-2.4.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.6.4.0-91/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.4.0-91/tez/lib/guava-11.0.2.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-annotations-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-aws-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-azure-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-azure-datalake-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-mapreduce-client-common-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-mapreduce-client-core-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/hadoop-yarn-server-web-proxy-2.7.3.2.6.4.0-91.jar:/usr/hdp/2.6.4.0-91/tez/lib/jersey-client-1.9.jar:/usr/hdp/2.6.4.0-91/tez/lib/jersey-json-1.9.jar:/usr/hdp/2.6.4.0-91/tez/lib/jettison-1.3.4.j
ar:/usr/hdp/2.6.4.0-91/tez/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.4.0-91/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.6.4.0-91/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/2.6.4.0-91/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.4.0-91/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.6.4.0-91/tez/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.6.4.0-91/tez/conf
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r a4b6e1c0e98b4488d38bcc0a241dcbe3538b1c4d; compiled by 'jenkins' on 2018-01-04T10:41Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
19/01/02 22:21:06 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/01/02 22:21:06 INFO namenode.NameNode: createNameNode [-format, -nonInteractive]
19/01/02 22:21:07 WARN common.Util: Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/01/02 22:21:07 WARN common.Util: Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-773e8392-51fb-4229-8f02-264197a64702
19/01/02 22:21:07 WARN common.Storage: set restore failed storage to true
19/01/02 22:21:07 INFO namenode.FSEditLog: Edit logging is async:false
19/01/02 22:21:07 INFO namenode.FSNamesystem: No KeyProvider found.
19/01/02 22:21:07 INFO namenode.FSNamesystem: Enabling async auditlog
19/01/02 22:21:07 INFO namenode.FSNamesystem: fsLock is fair:false
19/01/02 22:21:07 INFO blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
19/01/02 22:21:07 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19/01/02 22:21:07 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
19/01/02 22:21:07 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/01/02 22:21:07 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
19/01/02 22:21:07 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Jan 02 23:21:07
19/01/02 22:21:07 INFO util.GSet: Computing capacity for map BlocksMap
19/01/02 22:21:07 INFO util.GSet: VM type = 64-bit
19/01/02 22:21:07 INFO util.GSet: 2.0% max memory 1.5 GB = 30.3 MB
19/01/02 22:21:07 INFO util.GSet: capacity = 2^22 = 4194304 entries
19/01/02 22:21:07 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
19/01/02 22:21:07 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
19/01/02 22:21:07 INFO blockmanagement.BlockManager: defaultReplication = 3
19/01/02 22:21:07 INFO blockmanagement.BlockManager: maxReplication = 50
19/01/02 22:21:07 INFO blockmanagement.BlockManager: minReplication = 1
19/01/02 22:21:07 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
19/01/02 22:21:07 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/01/02 22:21:07 INFO blockmanagement.BlockManager: encryptDataTransfer = false
19/01/02 22:21:07 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
19/01/02 22:21:08 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
19/01/02 22:21:08 INFO namenode.FSNamesystem: supergroup = hdfs
19/01/02 22:21:08 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/01/02 22:21:08 INFO namenode.FSNamesystem: HA Enabled: false
19/01/02 22:21:08 INFO namenode.FSNamesystem: Append Enabled: true
19/01/02 22:21:08 INFO util.GSet: Computing capacity for map INodeMap
19/01/02 22:21:08 INFO util.GSet: VM type = 64-bit
19/01/02 22:21:08 INFO util.GSet: 1.0% max memory 1.5 GB = 15.2 MB
19/01/02 22:21:08 INFO util.GSet: capacity = 2^21 = 2097152 entries
19/01/02 22:21:08 INFO namenode.FSDirectory: ACLs enabled? false
19/01/02 22:21:08 INFO namenode.FSDirectory: XAttrs enabled? true
19/01/02 22:21:08 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
19/01/02 22:21:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/01/02 22:21:08 INFO util.GSet: Computing capacity for map cachedBlocks
19/01/02 22:21:08 INFO util.GSet: VM type = 64-bit
19/01/02 22:21:08 INFO util.GSet: 0.25% max memory 1.5 GB = 3.8 MB
19/01/02 22:21:08 INFO util.GSet: capacity = 2^19 = 524288 entries
19/01/02 22:21:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 1.0
19/01/02 22:21:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/01/02 22:21:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
19/01/02 22:21:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/01/02 22:21:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/01/02 22:21:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/01/02 22:21:08 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/01/02 22:21:08 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/01/02 22:21:08 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/01/02 22:21:08 INFO util.GSet: VM type = 64-bit
19/01/02 22:21:08 INFO util.GSet: 0.029999999329447746% max memory 1.5 GB = 466.0 KB
19/01/02 22:21:08 INFO util.GSet: capacity = 2^16 = 65536 entries
19/01/02 22:21:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1207953999-34.211.154.113-1546467668088
19/01/02 22:21:08 INFO common.Storage: Storage directory /hadoop/hdfs/namenode has been successfully formatted.
19/01/02 22:21:08 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
19/01/02 22:21:08 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 306 bytes saved in 0 seconds.
19/01/02 22:21:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/01/02 22:21:08 INFO util.ExitUtil: Exiting with status 0
19/01/02 22:21:08 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ec2-34-211-154-113.us-west-2.compute.amazonaws.com/34.211.154.113
************************************************************/
2019-01-02 22:21:08,371 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-01-02 22:21:08,371 - Creating directory Directory['/hadoop/hdfs/namenode/namenode-formatted/'] since it doesn't exist.
2019-01-02 22:21:08,371 - Options for start command are:
2019-01-02 22:21:08,372 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-01-02 22:21:08,372 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-01-02 22:21:08,372 - Changing group for /var/run/hadoop from 0 to hadoop
2019-01-02 22:21:08,372 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-01-02 22:21:08,373 - Creating directory Directory['/var/run/hadoop/hdfs'] since it doesn't exist.
2019-01-02 22:21:08,373 - Changing owner for /var/run/hadoop/hdfs from 0 to hdfs
2019-01-02 22:21:08,373 - Changing group for /var/run/hadoop/hdfs from 0 to hadoop
2019-01-02 22:21:08,374 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-01-02 22:21:08,374 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-01-02 22:21:08,379 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.4.0-91/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-01-02 22:21:12,473 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ec2-34-211-154-113.us-west-2.compute.amazonaws.com.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 14974
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201901022221 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 3878876k(401080k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1610612736 -XX:MaxHeapSize=1610612736 -XX:MaxNewSize=201326592 -XX:MaxTenuringThreshold=6 -XX:NewSize=201326592 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-01-02T22:21:10.466+0000: 1.969: [GC (Allocation Failure) 2019-01-02T22:21:10.466+0000: 1.969: [ParNew: 157312K->11719K(176960K), 0.0475908 secs] 157312K->11719K(1553216K), 0.0476927 secs] [Times: user=0.08 sys=0.01, real=0.05 secs]
Heap
par new generation total 176960K, used 44819K [0x00000000a0000000, 0x00000000ac000000, 0x00000000ac000000)
eden space 157312K, 21% used [0x00000000a0000000, 0x00000000a2053050, 0x00000000a99a0000)
from space 19648K, 59% used [0x00000000aacd0000, 0x00000000ab841c68, 0x00000000ac000000)
to space 19648K, 0% used [0x00000000a99a0000, 0x00000000a99a0000, 0x00000000aacd0000)
concurrent mark-sweep generation total 1376256K, used 0K [0x00000000ac000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 18250K, capacity 18612K, committed 18816K, reserved 1064960K
class space used 2248K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ec2-34-211-154-113.us-west-2.compute.amazonaws.com.log <==
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
... 9 more
2019-01-02 22:21:10,710 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2019-01-02 22:21:10,712 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-01-02 22:21:10,713 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2019-01-02 22:21:10,713 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2019-01-02 22:21:10,713 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.net.BindException: Port in use: ec2-34-211-154-113.us-west-2.compute.amazonaws.com:50070
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1001)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1080)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:942)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:755)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
... 9 more
2019-01-02 22:21:10,715 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2019-01-02 22:21:10,718 INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-01-02 22:21:10,722 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ec2-34-211-154-113.us-west-2.compute.amazonaws.com/34.211.154.113
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
Command failed after 1 tries
<script id="metamorph-23725-end" type="text/x-placeholder"></script>
Let me know what else can be done. This setup has actually been going on for a while. I'm using 2.6.4 on purpose, as I use it in production and I want to gain in-depth knowledge. TY all! Dan
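The stack trace above ends in "Cannot assign requested address" rather than "Address already in use", which is why port 50070 never shows up as occupied. A minimal Python sketch (an illustration, assuming Linux errno values; the FQDN is the one from the log) that separates the two failure modes:

```python
# Distinguish "port really in use" (errno 98, EADDRINUSE) from "address
# not local" (errno 99, EADDRNOTAVAIL). The latter is what the NameNode
# hits when its FQDN resolves to the EC2 public IP.
import errno
import socket

def diagnose(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return "port %d is free on %r" % (port, host)
    except socket.error as err:
        if err.errno == errno.EADDRINUSE:
            return "port %d is really in use" % port
        if err.errno == errno.EADDRNOTAVAIL:
            return "%r is not an address on this machine" % host
        return "bind failed: %s" % err
    finally:
        s.close()

print(diagnose("", 50070))
print(diagnose("ec2-34-211-154-113.us-west-2.compute.amazonaws.com", 50070))
```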
Labels:
- Hortonworks Data Platform (HDP)
08-31-2018 11:43 AM
Thank you. Not the answer I was hoping for, but at least I know a Zeppelin upgrade is something I can look into. My only question now is whether Zeppelin can be upgraded without upgrading the rest of the platform.
08-30-2018 04:09 PM
Hi, I use Zeppelin version 0.7.3 and I noticed it stores notebooks in a predefined folder *without* encryption. I use the current installation on my PoC cluster, but I still have several users. These users are, for example, using Zeppelin to access some of the databases available in our environment. If a user creates a notebook where he sets a DB connection using his user/pass, then some other user can read that password from /opt/hadoop/zeppelin-server/notebook/some_notebook/note.json. I see that users can set encryption when S3 is used, but I'm wondering if the same is true for a Git repo or the local filesystem (VFSNotebookRepo or GitNotebookRepo). Is there a way to set security at this level and prevent shell users from reading each other's notebooks? My temporary fix was to remove r and x permissions from "others", but I don't find this to be a real solution. Thank you
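For reference, the temporary workaround described above could look roughly like this Python sketch (the notebook path is the one from the post; adjust for your install):

```python
# Rough sketch of the stopgap fix: strip all "others" permissions from the
# notebook tree so shell users cannot read each other's note.json files.
import os
import stat

NOTEBOOK_DIR = "/opt/hadoop/zeppelin-server/notebook"

def strip_others(path):
    mode = os.stat(path).st_mode
    # clear r/w/x for "others"; owner and group bits stay untouched
    os.chmod(path, mode & ~(stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH))

strip_others(NOTEBOOK_DIR)
for root, dirs, files in os.walk(NOTEBOOK_DIR):
    for name in dirs + files:
        strip_others(os.path.join(root, name))
```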
Labels:
- Apache Zeppelin