Member since: 11-01-2017
Posts: 12
Kudos Received: 0
Solutions: 0
11-07-2018 01:31 PM
Thanks for this, @rchaman. Do you have a best-practice suggestion for ingestion via SAS, i.e. via a LIBNAME statement?
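For context, a minimal sketch of what LIBNAME-based ingestion could look like, assuming the SAS/ACCESS Interface to Hadoop is licensed; the server name, port, schema, and table names below are illustrative placeholders, not values from this thread:

/* Hypothetical HiveServer2 connection -- replace host/port/schema with your own. */
libname hdplib hadoop server="hiveserver2.example.com" port=10000 schema=default;

/* Copy a local SAS dataset into Hadoop through the libref. */
data hdplib.sales_ingest;
    set work.sales;  /* assumes work.sales already exists in the SAS session */
run;

Whether this is "best practice" depends on volume; for large loads, bulk-load options or landing files in HDFS first may be preferable.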
10-29-2018 07:39 PM
Hi All, at the "Finalize HA Setup" stage I get the following error:

Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)

stderr:
/var/lib/ambari-agent/data/errors-1233.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 173, in <module>
ZkfcSlave().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 58, in start
ZkfcSlaveDefault.start_static(env, upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 84, in start_static
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start zkfc'' returned 1. starting zkfc, logging to /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
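For what it's worth, DFSZKFailoverController throws this exception when the local host does not match any NameNode address configured for the nameservice, so ZKFC cannot determine which NameNode ID it belongs to. A hedged diagnostic sketch; the nameservice id "mycluster" and namenode ids "nn1"/"nn2" are placeholders for whatever your hdfs-site.xml actually defines:

# Compare this host's FQDN against the configured NameNode RPC addresses.
hostname -f
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn2

If the FQDN does not appear in any dfs.namenode.rpc-address.* value, that mismatch (or starting ZKFC on a host that is not a NameNode, such as the SecondaryNameNode host visible in the logs below) would explain the failure.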
stdout: /var/lib/ambari-agent/data/output-1233.txt
2018-10-29 16:26:37,039 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 16:26:37,052 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 16:26:37,215 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 16:26:37,219 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 16:26:37,220 - Group['kms'] {}
2018-10-29 16:26:37,221 - Group['livy'] {}
2018-10-29 16:26:37,222 - Group['spark'] {}
2018-10-29 16:26:37,222 - Group['ranger'] {}
2018-10-29 16:26:37,222 - Group['hdfs'] {}
2018-10-29 16:26:37,222 - Group['zeppelin'] {}
2018-10-29 16:26:37,222 - Group['hadoop'] {}
2018-10-29 16:26:37,222 - Group['users'] {}
2018-10-29 16:26:37,223 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,224 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,225 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,225 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,226 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,227 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 16:26:37,228 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,229 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 16:26:37,229 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger'], 'uid': None}
2018-10-29 16:26:37,230 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 16:26:37,231 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-10-29 16:26:37,232 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,233 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,234 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,234 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,235 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 16:26:37,236 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,237 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,238 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-10-29 16:26:37,238 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,239 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,240 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,241 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,242 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 16:26:37,242 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 16:26:37,244 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-10-29 16:26:37,251 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-10-29 16:26:37,251 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-10-29 16:26:37,252 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 16:26:37,254 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 16:26:37,254 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-10-29 16:26:37,265 - call returned (0, '1030')
2018-10-29 16:26:37,266 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1030'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-10-29 16:26:37,272 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1030'] due to not_if
2018-10-29 16:26:37,273 - Group['hdfs'] {}
2018-10-29 16:26:37,273 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-10-29 16:26:37,274 - FS Type:
2018-10-29 16:26:37,274 - Directory['/etc/hadoop'] {'mode': 0755}
2018-10-29 16:26:37,286 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 16:26:37,287 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-10-29 16:26:37,301 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-10-29 16:26:37,312 - Skipping Execute[('setenforce', '0')] due to not_if
2018-10-29 16:26:37,312 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-10-29 16:26:37,315 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-10-29 16:26:37,315 - Changing owner for /var/run/hadoop from 1020 to root
2018-10-29 16:26:37,315 - Changing group for /var/run/hadoop from 1000 to root
2018-10-29 16:26:37,315 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-10-29 16:26:37,319 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-10-29 16:26:37,320 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-10-29 16:26:37,325 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-10-29 16:26:37,333 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 16:26:37,334 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-10-29 16:26:37,334 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 16:26:37,337 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-10-29 16:26:37,342 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-10-29 16:26:37,620 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 16:26:37,621 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 16:26:37,638 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 16:26:37,650 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-10-29 16:26:37,655 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-10-29 16:26:37,655 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 16:26:37,661 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml
2018-10-29 16:26:37,662 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,668 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 16:26:37,674 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml
2018-10-29 16:26:37,674 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,678 - Directory['/usr/hdp/2.6.5.0-292/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-10-29 16:26:37,679 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 16:26:37,685 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml
2018-10-29 16:26:37,685 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,689 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 16:26:37,695 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml
2018-10-29 16:26:37,695 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,701 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2018-10-29 16:26:37,707 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml
2018-10-29 16:26:37,707 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,744 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2018-10-29 16:26:37,750 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml
2018-10-29 16:26:37,750 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-10-29 16:26:37,772 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2018-10-29 16:26:37,772 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 16:26:37,775 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for secure clusters.
2018-10-29 16:26:37,775 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2018-10-29 16:26:37,775 - Changing owner for /var/run/hadoop from 0 to hdfs
2018-10-29 16:26:37,775 - Changing group for /var/run/hadoop from 0 to hadoop
2018-10-29 16:26:37,775 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2018-10-29 16:26:37,775 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-29 16:26:37,776 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-29 16:26:37,776 - File['/var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'}
2018-10-29 16:26:37,800 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid']
2018-10-29 16:26:37,801 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start zkfc''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.5.0-292/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'}
2018-10-29 16:26:41,965 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810240133 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(183554216k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T01:33:05.852+0200: 1.362: [GC (Allocation Failure) 2018-10-24T01:33:05.852+0200: 1.362: [ParNew: 209792K->24212K(235968K), 0.0630501 secs] 209792K->40598K(2070976K), 0.0631829 secs] [Times: user=0.44 sys=0.02, real=0.07 secs]
2018-10-24T01:34:07.896+0200: 63.406: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 120652K(2070976K), 0.0079488 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-24T01:34:07.904+0200: 63.414: [CMS-concurrent-mark-start]
2018-10-24T01:34:07.908+0200: 63.418: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T01:34:07.908+0200: 63.418: [CMS-concurrent-preclean-start]
2018-10-24T01:34:07.914+0200: 63.424: [CMS-concurrent-preclean: 0.006/0.006 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2018-10-24T01:34:07.914+0200: 63.424: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T01:34:13.010+0200: 68.520: [CMS-concurrent-abortable-preclean: 1.216/5.096 secs] [Times: user=1.22 sys=0.00, real=5.09 secs]
2018-10-24T01:34:13.010+0200: 68.520: [GC (CMS Final Remark) [YG occupancy: 104965 K (235968 K)]2018-10-24T01:34:13.010+0200: 68.520: [Rescan (parallel) , 0.0072251 secs]2018-10-24T01:34:13.017+0200: 68.527: [weak refs processing, 0.0000252 secs]2018-10-24T01:34:13.017+0200: 68.527: [class unloading, 0.0036285 secs]2018-10-24T01:34:13.021+0200: 68.531: [scrub symbol table, 0.0054746 secs]2018-10-24T01:34:13.026+0200: 68.536: [scrub string table, 0.0003966 secs][1 CMS-remark: 16386K(1835008K)] 121351K(2070976K), 0.0174924 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2018-10-24T01:34:13.028+0200: 68.538: [CMS-concurrent-sweep-start]
2018-10-24T01:34:13.028+0200: 68.539: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T01:34:13.028+0200: 68.539: [CMS-concurrent-reset-start]
2018-10-24T01:34:13.037+0200: 68.548: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2018-10-24T03:01:07.137+0200: 5282.647: [GC (Allocation Failure) 2018-10-24T03:01:07.137+0200: 5282.647: [ParNew: 234004K->7440K(235968K), 0.0530361 secs] 250390K->39189K(2070976K), 0.0531219 secs] [Times: user=0.31 sys=0.02, real=0.05 secs]
2018-10-24T06:12:08.520+0200: 16744.030: [GC (Allocation Failure) 2018-10-24T06:12:08.520+0200: 16744.030: [ParNew: 217232K->2331K(235968K), 0.0068738 secs] 248981K->34081K(2070976K), 0.0069606 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-24T10:23:10.158+0200: 31805.668: [GC (Allocation Failure) 2018-10-24T10:23:10.158+0200: 31805.668: [ParNew: 212123K->1974K(235968K), 0.0066338 secs] 243873K->33724K(2070976K), 0.0067391 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 40729K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 18% used [0x0000000080000000, 0x00000000825d8ab0, 0x000000008cce0000)
from space 26176K, 7% used [0x000000008cce0000, 0x000000008cecdab0, 0x000000008e670000)
to space 26176K, 0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
concurrent mark-sweep generation total 1835008K, used 31749K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25564K, capacity 25848K, committed 26160K, reserved 1073152K
class space used 2709K, capacity 2810K, committed 2864K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.log <==
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:522)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
at java.lang.Thread.run(Thread.java:748)
2018-10-29 11:31:00,115 INFO namenode.SecondaryNameNode (SecondaryNameNode.java:run(465)) - Image has changed. Downloading updated image from NN.
2018-10-29 11:31:00,115 INFO namenode.TransferFsImage (TransferFsImage.java:getFileClient(414)) - Opening connection to http://omiprihdp02ap.mufep.net:50070/imagetransfer?getimage=1&txid=191958&storageInfo=-63:732195773:0:CID-7ae2eeee-3d0f-46a8-8707-18108377330d
2018-10-29 11:31:00,119 INFO namenode.TransferFsImage (TransferFsImage.java:receiveFile(592)) - Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 138000.00 KB/s. Synchronous (fsync) write to disk of /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000191958 took 0.00s.
2018-10-29 11:31:00,119 INFO namenode.TransferFsImage (TransferFsImage.java:downloadImageToStorage(116)) - Downloaded file fsimage.ckpt_0000000000000191958 size 142071 bytes.
2018-10-29 11:31:00,120 INFO namenode.TransferFsImage (TransferFsImage.java:getFileClient(414)) - Opening connection to http://omiprihdp02ap.mufep.net:50070/imagetransfer?getedit=1&startTxId=191959&endTxId=192022&storageInfo=-63:732195773:0:CID-7ae2eeee-3d0f-46a8-8707-18108377330d
2018-10-29 11:31:00,122 INFO namenode.TransferFsImage (TransferFsImage.java:receiveFile(592)) - Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 8000.00 KB/s. Synchronous (fsync) write to disk of /hadoop/hdfs/namesecondary/current/edits_tmp_0000000000000191959-0000000000000192022_0000000000433577912 took 0.00s.
2018-10-29 11:31:00,122 INFO namenode.TransferFsImage (TransferFsImage.java:downloadEditsToStorage(169)) - Downloaded file edits_tmp_0000000000000191959-0000000000000192022_0000000000433577912 size 0 bytes.
2018-10-29 11:31:00,133 INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(257)) - Loading 1744 INodes.
2018-10-29 11:31:00,140 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(184)) - Loaded FSImage in 0 seconds.
2018-10-29 11:31:00,140 INFO namenode.FSImage (FSImage.java:loadFSImage(911)) - Loaded image for txid 191958 from /hadoop/hdfs/namesecondary/current/fsimage_0000000000000191958
2018-10-29 11:31:00,140 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 3 entries 128 lookups
2018-10-29 11:31:00,141 INFO namenode.Checkpointer (Checkpointer.java:rollForwardByApplyingLogs(313)) - Checkpointer about to load edits from 1 stream(s).
2018-10-29 11:31:00,141 INFO namenode.FSImage (FSImage.java:loadEdits(849)) - Reading /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022 expecting start txid #191959
2018-10-29 11:31:00,141 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022
2018-10-29 11:31:00,142 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022 of size 8272 edits # 64 loaded in 0 seconds
2018-10-29 11:31:00,142 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(417)) - Saving image file /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000192022 using no compression
2018-10-29 11:31:00,149 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(421)) - Image file /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000192022 of size 141653 bytes saved in 0 seconds .
2018-10-29 11:31:00,151 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 2 images with txid >= 191958
2018-10-29 11:31:00,151 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:purgeImage(225)) - Purging old image FSImageFile(file=/hadoop/hdfs/namesecondary/current/fsimage_0000000000000182244, cpktTxId=0000000000000182244)
2018-10-29 11:31:00,152 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:purgeImage(225)) - Purging old image FSImageFile(file=/hadoop/hdfs/namesecondary/current/fsimage_0000000000000172464, cpktTxId=0000000000000172464)
2018-10-29 11:31:00,157 INFO namenode.TransferFsImage (TransferFsImage.java:copyFileToStream(395)) - Sending fileName: /hadoop/hdfs/namesecondary/current/fsimage_0000000000000192022, fileSize: 141653. Sent total: 141653 bytes. Size of last segment intended to send: -1 bytes.
2018-10-29 11:31:00,163 INFO namenode.TransferFsImage (TransferFsImage.java:uploadImageFromStorage(238)) - Uploaded image with txid 192022 to namenode at http://omiprihdp02ap.mufep.net:50070 in 0.008 seconds
2018-10-29 11:31:00,163 WARN namenode.SecondaryNameNode (SecondaryNameNode.java:doCheckpoint(576)) - Checkpoint done. New Image Size: 141653
2018-10-29 11:52:31,427 ERROR namenode.SecondaryNameNode (LogAdapter.java:error(69)) - RECEIVED SIGNAL 15: SIGTERM
2018-10-29 11:52:31,429 INFO namenode.SecondaryNameNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down SecondaryNameNode at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/gc.log-201810291544 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184342048k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:44:22.568+0200: 1.253: [GC (Allocation Failure) 2018-10-29T15:44:22.568+0200: 1.254: [ParNew: 209792K->14136K(235968K), 0.0130374 secs] 209792K->14136K(2070976K), 0.0137064 secs] [Times: user=0.05 sys=0.01, real=0.02 secs]
Heap
par new generation total 235968K, used 96401K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056448, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43e310, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21468K, capacity 21750K, committed 21960K, reserved 1069056K
class space used 2445K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241115 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(195238760k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T11:15:34.052+0200: 1.648: [GC (Allocation Failure) 2018-10-24T11:15:34.052+0200: 1.648: [ParNew: 209792K->24380K(235968K), 0.0762337 secs] 209792K->40766K(2070976K), 0.0763600 secs] [Times: user=0.54 sys=0.02, real=0.07 secs]
2018-10-24T11:16:36.116+0200: 63.712: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 80153K(2070976K), 0.0080867 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-24T11:16:36.125+0200: 63.721: [CMS-concurrent-mark-start]
2018-10-24T11:16:36.129+0200: 63.725: [CMS-concurrent-mark: 0.005/0.005 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T11:16:36.129+0200: 63.725: [CMS-concurrent-preclean-start]
2018-10-24T11:16:36.132+0200: 63.728: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2018-10-24T11:16:36.132+0200: 63.728: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T11:16:41.173+0200: 68.769: [CMS-concurrent-abortable-preclean: 1.232/5.041 secs] [Times: user=1.23 sys=0.01, real=5.04 secs]
2018-10-24T11:16:41.173+0200: 68.769: [GC (CMS Final Remark) [YG occupancy: 63767 K (235968 K)]2018-10-24T11:16:41.173+0200: 68.769: [Rescan (parallel) , 0.0077752 secs]2018-10-24T11:16:41.181+0200: 68.777: [weak refs processing, 0.0000230 secs]2018-10-24T11:16:41.181+0200: 68.777: [class unloading, 0.0029923 secs]2018-10-24T11:16:41.184+0200: 68.780: [scrub symbol table, 0.0040177 secs]2018-10-24T11:16:41.188+0200: 68.784: [scrub string table, 0.0003500 secs][1 CMS-remark: 16386K(1835008K)] 80153K(2070976K), 0.0158982 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
2018-10-24T11:16:41.189+0200: 68.786: [CMS-concurrent-sweep-start]
2018-10-24T11:16:41.191+0200: 68.787: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2018-10-24T11:16:41.191+0200: 68.787: [CMS-concurrent-reset-start]
2018-10-24T11:16:41.199+0200: 68.795: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
par new generation total 235968K, used 107999K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000851a8c30, 0x000000008cce0000)
from space 26176K, 93% used [0x000000008e670000, 0x000000008fe3f2d0, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21555K, capacity 21808K, committed 22216K, reserved 1069056K
class space used 2411K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291417 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184370952k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:17:37.703+0200: 1.261: [GC (Allocation Failure) 2018-10-29T14:17:37.704+0200: 1.262: [ParNew: 209792K->14139K(235968K), 0.0119574 secs] 209792K->14139K(2070976K), 0.0126222 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96404K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056618, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43ed88, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21425K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291456 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184356744k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:56:44.825+0200: 1.255: [GC (Allocation Failure) 2018-10-29T14:56:44.826+0200: 1.255: [ParNew: 209792K->14147K(235968K), 0.0134854 secs] 209792K->14147K(2070976K), 0.0142006 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96413K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056688, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440e20, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21422K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291538 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184340624k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:38:34.008+0200: 1.302: [GC (Allocation Failure) 2018-10-29T15:38:34.009+0200: 1.302: [ParNew: 209792K->14151K(235968K), 0.0128067 secs] 209792K->14151K(2070976K), 0.0134940 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
Heap
par new generation total 235968K, used 98514K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262c10, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f441de8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21460K, capacity 21718K, committed 21960K, reserved 1069056K
class space used 2439K, capacity 2521K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241138 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(194548592k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T11:38:02.324+0200: 1.282: [GC (Allocation Failure) 2018-10-24T11:38:02.324+0200: 1.282: [ParNew: 209792K->24379K(235968K), 0.1062387 secs] 209792K->40765K(2070976K), 0.1063616 secs] [Times: user=0.79 sys=0.01, real=0.10 secs]
2018-10-24T11:39:04.417+0200: 63.375: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 78189K(2070976K), 0.0079674 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-24T11:39:04.425+0200: 63.383: [CMS-concurrent-mark-start]
2018-10-24T11:39:04.430+0200: 63.388: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T11:39:04.430+0200: 63.388: [CMS-concurrent-preclean-start]
2018-10-24T11:39:04.432+0200: 63.390: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2018-10-24T11:39:04.432+0200: 63.390: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T11:39:09.526+0200: 68.484: [CMS-concurrent-abortable-preclean: 1.252/5.094 secs] [Times: user=1.25 sys=0.01, real=5.09 secs]
2018-10-24T11:39:09.526+0200: 68.485: [GC (CMS Final Remark) [YG occupancy: 61803 K (235968 K)]2018-10-24T11:39:09.527+0200: 68.485: [Rescan (parallel) , 0.0073957 secs]2018-10-24T11:39:09.534+0200: 68.492: [weak refs processing, 0.0000243 secs]2018-10-24T11:39:09.534+0200: 68.492: [class unloading, 0.0031692 secs]2018-10-24T11:39:09.537+0200: 68.495: [scrub symbol table, 0.0045382 secs]2018-10-24T11:39:09.542+0200: 68.500: [scrub string table, 0.0004086 secs][1 CMS-remark: 16386K(1835008K)] 78189K(2070976K), 0.0161897 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
2018-10-24T11:39:09.543+0200: 68.501: [CMS-concurrent-sweep-start]
2018-10-24T11:39:09.544+0200: 68.502: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T11:39:09.544+0200: 68.502: [CMS-concurrent-reset-start]
2018-10-24T11:39:09.552+0200: 68.510: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2018-10-24T13:29:03.258+0200: 6662.216: [GC (Allocation Failure) 2018-10-24T13:29:03.258+0200: 6662.216: [ParNew: 234171K->5658K(235968K), 0.0520074 secs] 250557K->37556K(2070976K), 0.0521378 secs] [Times: user=0.32 sys=0.02, real=0.05 secs]
Heap
par new generation total 235968K, used 31930K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 12% used [0x0000000080000000, 0x00000000819a7c18, 0x000000008cce0000)
from space 26176K, 21% used [0x000000008cce0000, 0x000000008d266bf8, 0x000000008e670000)
to space 26176K, 0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
concurrent mark-sweep generation total 1835008K, used 31897K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21912K, capacity 22192K, committed 22472K, reserved 1069056K
class space used 2411K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291230 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184369936k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T12:30:17.616+0200: 1.293: [GC (Allocation Failure) 2018-10-29T12:30:17.617+0200: 1.294: [ParNew: 209792K->14145K(235968K), 0.0123403 secs] 209792K->14145K(2070976K), 0.0130008 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98508K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262e48, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440540, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291303 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184368176k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:03:23.130+0200: 1.185: [GC (Allocation Failure) 2018-10-29T13:03:23.130+0200: 1.186: [ParNew: 209792K->14144K(235968K), 0.0145294 secs] 209792K->14144K(2070976K), 0.0146526 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
Heap
par new generation total 235968K, used 98508K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262df0, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440228, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21416K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291319 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184368312k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:19:17.693+0200: 1.250: [GC (Allocation Failure) 2018-10-29T13:19:17.693+0200: 1.251: [ParNew: 209792K->14138K(235968K), 0.0131332 secs] 209792K->14138K(2070976K), 0.0137785 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98501K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262d80, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43ea70, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21425K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291339 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184376140k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:39:49.252+0200: 1.223: [GC (Allocation Failure) 2018-10-29T13:39:49.252+0200: 1.223: [ParNew: 209792K->14144K(235968K), 0.0143870 secs] 209792K->14144K(2070976K), 0.0151843 secs] [Times: user=0.05 sys=0.02, real=0.01 secs]
Heap
par new generation total 235968K, used 96410K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000850566c8, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f4403d0, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21422K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291536 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184172008k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:36:11.561+0200: 1.301: [GC (Allocation Failure) 2018-10-29T15:36:11.561+0200: 1.301: [ParNew: 209792K->14147K(235968K), 0.0145400 secs] 209792K->14147K(2070976K), 0.0147380 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
Heap
par new generation total 235968K, used 100608K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 41% used [0x0000000080000000, 0x000000008546f670, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440c60, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21468K, capacity 21750K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241341 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(187112732k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T13:41:25.777+0200: 1.302: [GC (Allocation Failure) 2018-10-24T13:41:25.777+0200: 1.302: [ParNew: 209792K->24580K(235968K), 0.1099037 secs] 209792K->40966K(2070976K), 0.1100410 secs] [Times: user=0.87 sys=0.02, real=0.11 secs]
2018-10-24T13:42:27.873+0200: 63.398: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84414K(2070976K), 0.0079132 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-24T13:42:27.881+0200: 63.406: [CMS-concurrent-mark-start]
2018-10-24T13:42:27.885+0200: 63.410: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T13:42:27.885+0200: 63.410: [CMS-concurrent-preclean-start]
2018-10-24T13:42:27.888+0200: 63.413: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T13:42:27.888+0200: 63.413: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T13:42:32.970+0200: 68.495: [CMS-concurrent-abortable-preclean: 1.270/5.082 secs] [Times: user=1.27 sys=0.01, real=5.08 secs]
2018-10-24T13:42:32.970+0200: 68.495: [GC (CMS Final Remark) [YG occupancy: 68028 K (235968 K)]2018-10-24T13:42:32.970+0200: 68.495: [Rescan (parallel) , 0.0077338 secs]2018-10-24T13:42:32.978+0200: 68.503: [weak refs processing, 0.0000556 secs]2018-10-24T13:42:32.978+0200: 68.503: [class unloading, 0.0032177 secs]2018-10-24T13:42:32.981+0200: 68.506: [scrub symbol table, 0.0043939 secs]2018-10-24T13:42:32.986+0200: 68.510: [scrub string table, 0.0003463 secs][1 CMS-remark: 16386K(1835008K)] 84414K(2070976K), 0.0164618 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2018-10-24T13:42:32.987+0200: 68.512: [CMS-concurrent-sweep-start]
2018-10-24T13:42:32.988+0200: 68.512: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T13:42:32.988+0200: 68.512: [CMS-concurrent-reset-start]
2018-10-24T13:42:32.995+0200: 68.520: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 133373K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 51% used [0x0000000080000000, 0x0000000086a3e7a8, 0x000000008cce0000)
from space 26176K, 93% used [0x000000008e670000, 0x000000008fe71020, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21630K, capacity 21880K, committed 22140K, reserved 1069056K
class space used 2412K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291213 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184468500k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T12:13:38.797+0200: 1.264: [GC (Allocation Failure) 2018-10-29T12:13:38.798+0200: 1.264: [ParNew: 209792K->14147K(235968K), 0.0127328 secs] 209792K->14147K(2070976K), 0.0133953 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96412K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056658, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440d80, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21422K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.5 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810291430 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184381788k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:30:38.013+0200: 1.275: [GC (Allocation Failure) 2018-10-29T14:30:38.014+0200: 1.275: [ParNew: 209792K->14162K(235968K), 0.0135859 secs] 209792K->14162K(2070976K), 0.0142507 secs] [Times: user=0.04 sys=0.02, real=0.01 secs]
Heap
par new generation total 235968K, used 96429K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000850569b0, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f444be8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21416K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291500 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184346728k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:00:37.691+0200: 1.251: [GC (Allocation Failure) 2018-10-29T15:00:37.692+0200: 1.252: [ParNew: 209792K->14133K(235968K), 0.0126972 secs] 209792K->14133K(2070976K), 0.0134547 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98497K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262e30, 0x000000008cce0000)
from space 26176K, 53% used [0x000000008e670000, 0x000000008f43d6f8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21413K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291547 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184342928k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:47:22.279+0200: 1.240: [GC (Allocation Failure) 2018-10-29T15:47:22.280+0200: 1.240: [ParNew: 209792K->14147K(235968K), 0.0119965 secs] 209792K->14147K(2070976K), 0.0126317 secs] [Times: user=0.05 sys=0.01, real=0.02 secs]
Heap
par new generation total 235968K, used 98510K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262e28, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440d30, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21461K, capacity 21750K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241423 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(173589320k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T14:23:45.451+0200: 1.342: [GC (Allocation Failure) 2018-10-24T14:23:45.451+0200: 1.342: [ParNew: 209792K->24587K(235968K), 0.0642295 secs] 209792K->40973K(2070976K), 0.0643522 secs] [Times: user=0.52 sys=0.02, real=0.06 secs]
2018-10-24T14:24:47.494+0200: 63.385: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84334K(2070976K), 0.0082104 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-24T14:24:47.502+0200: 63.393: [CMS-concurrent-mark-start]
2018-10-24T14:24:47.505+0200: 63.396: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T14:24:47.505+0200: 63.396: [CMS-concurrent-preclean-start]
2018-10-24T14:24:47.508+0200: 63.399: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T14:24:47.508+0200: 63.399: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T14:24:52.564+0200: 68.455: [CMS-concurrent-abortable-preclean: 1.249/5.056 secs] [Times: user=1.25 sys=0.00, real=5.06 secs]
2018-10-24T14:24:52.564+0200: 68.455: [GC (CMS Final Remark) [YG occupancy: 67948 K (235968 K)]2018-10-24T14:24:52.564+0200: 68.455: [Rescan (parallel) , 0.0076604 secs]2018-10-24T14:24:52.572+0200: 68.463: [weak refs processing, 0.0000248 secs]2018-10-24T14:24:52.572+0200: 68.463: [class unloading, 0.0028079 secs]2018-10-24T14:24:52.575+0200: 68.466: [scrub symbol table, 0.0036138 secs]2018-10-24T14:24:52.578+0200: 68.469: [scrub string table, 0.0003101 secs][1 CMS-remark: 16386K(1835008K)] 84334K(2070976K), 0.0150859 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-sweep-start]
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-reset-start]
2018-10-24T14:24:52.588+0200: 68.479: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 117116K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 44% used [0x0000000080000000, 0x0000000085a5c380, 0x000000008cce0000)
from space 26176K, 93% used [0x000000008e670000, 0x000000008fe72e58, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21576K, capacity 21848K, committed 22140K, reserved 1069056K
class space used 2405K, capacity 2456K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.5 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.log <==
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 0
2018-10-29 15:47:22,494 INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1302)) - Stopping services started for active state
2018-10-29 15:47:22,494 INFO namenode.FSNamesystem (FSNamesystem.java:writeUnlock(1689)) - FSNamesystem write lock held for 0 ms via
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1690)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1339)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1760)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:918)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:716)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 0
2018-10-29 15:47:22,494 INFO namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1392)) - Stopping services started for standby state
2018-10-29 15:47:22,495 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.lang.IllegalStateException: Could not determine own NN ID in namespace 'OmiHdpPrdCluster'. Please ensure that this node is one of the machines listed as an NN RPC address, or configure dfs.ha.namenode.id
at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at org.apache.hadoop.hdfs.HAUtil.getNameNodeIdOfOtherNode(HAUtil.java:164)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createBlockTokenSecretManager(BlockManager.java:442)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:334)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:781)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:716)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
2018-10-29 15:47:22,496 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-10-29 15:47:22,497 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
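The IllegalStateException above names the two ways out: this host has to resolve to one of the NameNode RPC addresses configured for the nameservice, or dfs.ha.namenode.id has to be set explicitly. For reference, a quick cross-check from the failing node - the nameservice name is taken from the error above, while the nn1/nn2 IDs below are placeholders for whatever IDs the cluster actually defines in dfs.ha.namenodes.OmiHdpPrdCluster:

# FQDN of this node - must match one of the configured RPC addresses
hostname -f
# nameservice and NameNode IDs as HDFS sees them
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.OmiHdpPrdCluster
# RPC address per NameNode ID (repeat for each ID, e.g. nn1, nn2)
hdfs getconf -confKey dfs.namenode.rpc-address.OmiHdpPrdCluster.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.OmiHdpPrdCluster.nn2
# or list every NameNode HDFS knows about in one go
hdfs getconf -namenodes

If omiprihdp03ap.mufep.net is missing from those RPC addresses, the NameNode fails as shown above, and ZKFC fails with the "Could not get the namenode ID of this node" error for the same reason.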
==> /var/log/hadoop/hdfs/gc.log-201810291223 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184366120k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T12:23:08.681+0200: 1.264: [GC (Allocation Failure) 2018-10-29T12:23:08.681+0200: 1.264: [ParNew: 209792K->14139K(235968K), 0.0126677 secs] 209792K->14139K(2070976K), 0.0128856 secs] [Times: user=0.04 sys=0.01, real=0.02 secs]
Heap
par new generation total 235968K, used 100601K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 41% used [0x0000000080000000, 0x000000008546f5c8, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43ee78, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21418K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291325 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184371044k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:25:43.642+0200: 1.242: [GC (Allocation Failure) 2018-10-29T13:25:43.643+0200: 1.243: [ParNew: 209792K->14145K(235968K), 0.0125742 secs] 209792K->14145K(2070976K), 0.0133519 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96411K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056780, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f4405c8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21419K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291341 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184375248k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:41:15.377+0200: 1.251: [GC (Allocation Failure) 2018-10-29T13:41:15.377+0200: 1.251: [ParNew: 209792K->14131K(235968K), 0.0121842 secs] 209792K->14131K(2070976K), 0.0123911 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96396K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000850564f0, 0x000000008cce0000)
from space 26176K, 53% used [0x000000008e670000, 0x000000008f43cc20, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out.4 <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810241449 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(171667804k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T14:49:15.104+0200: 1.257: [GC (Allocation Failure) 2018-10-24T14:49:15.105+0200: 1.257: [ParNew: 209792K->24576K(235968K), 0.0926388 secs] 209792K->40962K(2070976K), 0.0927928 secs] [Times: user=0.74 sys=0.02, real=0.09 secs]
2018-10-24T14:50:17.185+0200: 63.337: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84159K(2070976K), 0.0077603 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-24T14:50:17.193+0200: 63.345: [CMS-concurrent-mark-start]
2018-10-24T14:50:17.197+0200: 63.349: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T14:50:17.197+0200: 63.349: [CMS-concurrent-preclean-start]
2018-10-24T14:50:17.199+0200: 63.352: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T14:50:17.199+0200: 63.352: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T14:50:22.275+0200: 68.427: [CMS-concurrent-abortable-preclean: 1.264/5.076 secs] [Times: user=1.26 sys=0.01, real=5.08 secs]
2018-10-24T14:50:22.276+0200: 68.428: [GC (CMS Final Remark) [YG occupancy: 67773 K (235968 K)]2018-10-24T14:50:22.276+0200: 68.428: [Rescan (parallel) , 0.0076826 secs]2018-10-24T14:50:22.283+0200: 68.436: [weak refs processing, 0.0000249 secs]2018-10-24T14:50:22.283+0200: 68.436: [class unloading, 0.0028875 secs]2018-10-24T14:50:22.286+0200: 68.439: [scrub symbol table, 0.0038950 secs]2018-10-24T14:50:22.290+0200: 68.442: [scrub string table, 0.0003330 secs][1 CMS-remark: 16386K(1835008K)] 84159K(2070976K), 0.0156261 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-sweep-start]
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-reset-start]
2018-10-24T14:50:22.300+0200: 68.452: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
par new generation total 235968K, used 199094K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 83% used [0x0000000080000000, 0x000000008aa6db80, 0x000000008cce0000)
from space 26176K, 93% used [0x000000008e670000, 0x000000008fe70040, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21750K, capacity 21976K, committed 22216K, reserved 1069056K
class space used 2405K, capacity 2456K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810291323 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184371504k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:23:56.564+0200: 1.250: [GC (Allocation Failure) 2018-10-29T13:23:56.564+0200: 1.251: [ParNew: 209792K->14156K(235968K), 0.0121555 secs] 209792K->14156K(2070976K), 0.0122764 secs] [Times: user=0.06 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96421K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056608, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f443158, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291439 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184354416k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:39:57.276+0200: 1.240: [GC (Allocation Failure) 2018-10-29T14:39:57.277+0200: 1.241: [ParNew: 209792K->14131K(235968K), 0.0135101 secs] 209792K->14131K(2070976K), 0.0141651 secs] [Times: user=0.06 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96396K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000850566d0, 0x000000008cce0000)
from space 26176K, 53% used [0x000000008e670000, 0x000000008f43cca8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21421K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291507 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184351576k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:07:20.816+0200: 1.255: [GC (Allocation Failure) 2018-10-29T15:07:20.817+0200: 1.256: [ParNew: 209792K->14138K(235968K), 0.0126537 secs] 209792K->14138K(2070976K), 0.0133192 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98501K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262e38, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43e998, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21421K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.log <==
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2018-05-11T07:53Z
STARTUP_MSG: java = 1.8.0_191
************************************************************/
2018-10-29 16:18:09,314 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-29 16:18:09,574 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
2018-10-29 16:22:02,483 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = omiprihdp03ap.mufep.net/10.6.7.23
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.6.5.0-292
STARTUP_MSG: classpath = /usr/hdp/2.6.5.0-292/hadoop/conf:/usr/hdp/2.6.5.0-292/hadoop/lib/ojdbc6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/ranger-hdfs-plugin-shim-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/ranger-plugin-classloader-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/ranger-yarn-plugin-shim-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/activation-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/xz-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/joda-time-2.9.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/json-smart-1.3.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/junit-4.11.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.6.5
.0-292/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/zookeeper-3.4.6.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.6.5.0-292/hadoop/.//azure-data-lake-store-sdk-2.2.5.jar:/usr/hdp/2.6.5.0-292/hadoop/.//gcs-connector-1.8.1.2.6.5.0-292-shaded.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-annotations-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-auth-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-auth.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-aws-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-aws.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-azure-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-azure-datalake-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-azure.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-common-2.7.3.2.6.5.0-292-tests.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-common.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-nfs-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/./:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/hadoo
p-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs-2.7.3.2.6.5.0-292-tests.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/json-smart-1.3.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/gu
ava-11.0.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/httpclient-4.5.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/httpcore-4.4.4.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/zookeeper-3.4.6.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jcip-annotations-1.0-1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/zookeeper-3.4.6.2.6.5.0-292-tests.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-api-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-client-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-registry-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.6.5.0-
292/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-rumen-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-gridmix-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-sls-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jcip-annotations-1.0-1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-compress-1.4.1.jar:
/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//httpclient-4.5.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//httpcore-4.4.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-lang3-3.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-ant-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//okhttp-2.7.5.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-archives-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-auth-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jsch-0.1.54.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-datajoin-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//json-smart-1.3.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-distcp-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-openstack-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-extras-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-ma
preduce-client-core.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//okio-1.6.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.3.2.6.5.0-292-tests.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//hadoop-streaming-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//nimbus-jose-jwt-4.41.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//zookeeper-3.4.6.2.6.5.0-292.jar::/usr/hdp/2.6.5.0-292/tez/tez-api-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-common-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-dag-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-examples-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-history-parser-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-job-analyzer-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-mapreduce-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-runtime-internals-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-runtime-library-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-tests-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-cache-plugin-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-with-acls-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-with-fs-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/azure-data-lake-store-sdk-2.1.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-collections4-4.1.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.5.0-292/tez/lib/gcs-connector-1.8.1.2.6.5.0-292-shaded.jar:/usr/hdp/2.6.5.0-292/tez/lib/guava-11.0.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-annotations-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-aws-2.7.3.2.6.5.0-292.jar:/usr/
hdp/2.6.5.0-292/tez/lib/hadoop-azure-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-azure-datalake-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-mapreduce-client-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-mapreduce-client-core-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-yarn-server-web-proxy-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/jersey-client-1.9.jar:/usr/hdp/2.6.5.0-292/tez/lib/jersey-json-1.9.jar:/usr/hdp/2.6.5.0-292/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.6.5.0-292/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/2.6.5.0-292/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/tez/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.6.5.0-292/tez/conf:/usr/hdp/2.6.5.0-292/tez/tez-api-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-common-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-dag-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-examples-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-history-parser-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-job-analyzer-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-mapreduce-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-runtime-internals-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-runtime-library-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-tests-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-cache-plugin-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-with-acls-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/tez-yarn-timeline-history-with-fs-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/azure-data-lake-store-sdk-2.1.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-collections4-4.1.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-io-2.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.5.0-292/tez/lib/gcs-connector-1.8.1.2.6.5.0-292-shaded.jar:/usr/hdp/2.6.5.0-292/tez/lib/guava-11.0.2.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-annotations-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-aws-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-azure-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-azure-datalake-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-mapreduce-client-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-mapreduce-client-core-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/hadoop-yarn-server-web-proxy-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/tez/lib/jersey-client-1.9.jar:/usr/hdp/2.6.5.0-292/tez/lib/jersey-json-1.9.jar:/usr/hdp/2.6.5.0-292/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.6.5.0-292/tez/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.5.0-292/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.6.5.0-292/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/2.6.5.0-292/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.5.0-292/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/
tez/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.6.5.0-292/tez/conf
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2018-05-11T07:53Z
STARTUP_MSG: java = 1.8.0_191
************************************************************/
2018-10-29 16:22:02,493 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-29 16:22:02,724 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
2018-10-29 16:26:38,350 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = omiprihdp03ap.mufep.net/10.6.7.23
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.6.5.0-292
STARTUP_MSG: classpath = [same classpath as logged in the previous STARTUP_MSG block]
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2018-05-11T07:53Z
STARTUP_MSG: java = 1.8.0_191
************************************************************/
2018-10-29 16:26:38,360 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-29 16:26:38,585 INFO tools.DFSZKFailoverController (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
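The pattern above repeats on every start attempt: ZKFC registers its signal handlers and immediately shuts down with the "Could not get the namenode ID of this node" error. As far as I can tell, DFSZKFailoverController derives the local NameNode ID by matching this host against the dfs.namenode.rpc-address.<nameservice>.<nn> entries in hdfs-site.xml, so the failure suggests neither entry resolves to omiprihdp03ap.mufep.net. A quick way to cross-check on the node would be the following (the nameservice "mycluster" and the IDs "nn1"/"nn2" are placeholders - substitute your own values):

hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn2
hostname -f    # should match the host part of one of the rpc-address values

If hostname -f matches neither rpc-address (or the addresses still point at the old single NameNode), ZKFC fails with exactly this HadoopIllegalArgumentException.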
==> /var/log/hadoop/hdfs/gc.log-201810241610 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(167802056k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T16:10:41.265+0200: 1.328: [GC (Allocation Failure) 2018-10-24T16:10:41.265+0200: 1.328: [ParNew: 209792K->24590K(235968K), 0.1116302 secs] 209792K->40976K(2070976K), 0.1117583 secs] [Times: user=0.88 sys=0.02, real=0.11 secs]
2018-10-24T16:11:43.354+0200: 63.417: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84338K(2070976K), 0.0077304 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-24T16:11:43.362+0200: 63.425: [CMS-concurrent-mark-start]
2018-10-24T16:11:43.366+0200: 63.429: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T16:11:43.366+0200: 63.429: [CMS-concurrent-preclean-start]
2018-10-24T16:11:43.368+0200: 63.431: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T16:11:43.368+0200: 63.431: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T16:11:48.437+0200: 68.500: [CMS-concurrent-abortable-preclean: 1.257/5.069 secs] [Times: user=1.26 sys=0.00, real=5.07 secs]
2018-10-24T16:11:48.438+0200: 68.501: [GC (CMS Final Remark) [YG occupancy: 67952 K (235968 K)]2018-10-24T16:11:48.438+0200: 68.501: [Rescan (parallel) , 0.0075482 secs]2018-10-24T16:11:48.445+0200: 68.508: [weak refs processing, 0.0000230 secs]2018-10-24T16:11:48.445+0200: 68.508: [class unloading, 0.0031608 secs]2018-10-24T16:11:48.448+0200: 68.511: [scrub symbol table, 0.0040634 secs]2018-10-24T16:11:48.452+0200: 68.515: [scrub string table, 0.0003496 secs][1 CMS-remark: 16386K(1835008K)] 84338K(2070976K), 0.0157823 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2018-10-24T16:11:48.454+0200: 68.517: [CMS-concurrent-sweep-start]
2018-10-24T16:11:48.455+0200: 68.518: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T16:11:48.455+0200: 68.518: [CMS-concurrent-reset-start]
2018-10-24T16:11:48.463+0200: 68.526: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 145259K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 57% used [0x0000000080000000, 0x00000000875d7378, 0x000000008cce0000)
from space 26176K, 93% used [0x000000008e670000, 0x000000008fe73bb0, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21658K, capacity 21916K, committed 22140K, reserved 1069056K
class space used 2405K, capacity 2456K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291249 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184383344k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T12:49:16.103+0200: 1.239: [GC (Allocation Failure) 2018-10-29T12:49:16.104+0200: 1.239: [ParNew: 209792K->14140K(235968K), 0.0116747 secs] 209792K->14140K(2070976K), 0.0123039 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 98503K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262d80, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43f0a8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21420K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291304 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184374808k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:04:04.008+0200: 1.269: [GC (Allocation Failure) 2018-10-29T13:04:04.008+0200: 1.269: [ParNew: 209792K->14150K(235968K), 0.0130799 secs] 209792K->14150K(2070976K), 0.0132757 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
Heap
par new generation total 235968K, used 96416K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056940, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f4419d8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291406 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184367832k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:06:42.892+0200: 1.229: [GC (Allocation Failure) 2018-10-29T14:06:42.892+0200: 1.230: [ParNew: 209792K->14150K(235968K), 0.0118410 secs] 209792K->14150K(2070976K), 0.0124932 secs] [Times: user=0.04 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98504K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085260788, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f441b80, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21418K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810241655 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(167319644k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T16:55:52.756+0200: 1.334: [GC (Allocation Failure) 2018-10-24T16:55:52.756+0200: 1.334: [ParNew: 209792K->24951K(235968K), 0.1101571 secs] 209792K->41337K(2070976K), 0.1102825 secs] [Times: user=0.88 sys=0.02, real=0.11 secs]
2018-10-24T16:56:54.843+0200: 63.421: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 88644K(2070976K), 0.0076252 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-24T16:56:54.851+0200: 63.429: [CMS-concurrent-mark-start]
2018-10-24T16:56:54.855+0200: 63.432: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T16:56:54.855+0200: 63.432: [CMS-concurrent-preclean-start]
2018-10-24T16:56:54.857+0200: 63.435: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2018-10-24T16:56:54.857+0200: 63.435: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T16:56:59.948+0200: 68.526: [CMS-concurrent-abortable-preclean: 1.382/5.091 secs] [Times: user=1.38 sys=0.00, real=5.09 secs]
2018-10-24T16:56:59.948+0200: 68.526: [GC (CMS Final Remark) [YG occupancy: 72257 K (235968 K)]2018-10-24T16:56:59.948+0200: 68.526: [Rescan (parallel) , 0.0077662 secs]2018-10-24T16:56:59.956+0200: 68.534: [weak refs processing, 0.0000287 secs]2018-10-24T16:56:59.956+0200: 68.534: [class unloading, 0.0035862 secs]2018-10-24T16:56:59.960+0200: 68.538: [scrub symbol table, 0.0051221 secs]2018-10-24T16:56:59.965+0200: 68.543: [scrub string table, 0.0004291 secs][1 CMS-remark: 16386K(1835008K)] 88644K(2070976K), 0.0176956 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
2018-10-24T16:56:59.966+0200: 68.544: [CMS-concurrent-sweep-start]
2018-10-24T16:56:59.967+0200: 68.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T16:56:59.967+0200: 68.545: [CMS-concurrent-reset-start]
2018-10-24T16:56:59.975+0200: 68.553: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2018-10-24T17:54:53.950+0200: 3542.527: [GC (Allocation Failure) 2018-10-24T17:54:53.950+0200: 3542.528: [ParNew: 234743K->7941K(235968K), 0.0690527 secs] 251129K->40213K(2070976K), 0.0691659 secs] [Times: user=0.44 sys=0.05, real=0.07 secs]
2018-10-24T20:53:55.037+0200: 14283.615: [GC (Allocation Failure) 2018-10-24T20:53:55.037+0200: 14283.615: [ParNew: 217733K->2849K(235968K), 0.0067938 secs] 250005K->35121K(2070976K), 0.0068772 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 96286K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 44% used [0x0000000080000000, 0x0000000085b3f620, 0x000000008cce0000)
from space 26176K, 10% used [0x000000008e670000, 0x000000008e9384e8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 32272K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26269K, capacity 26592K, committed 26952K, reserved 1073152K
class space used 2758K, capacity 2842K, committed 2944K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291444 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184357828k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:44:59.307+0200: 1.236: [GC (Allocation Failure) 2018-10-29T14:44:59.308+0200: 1.236: [ParNew: 209792K->14148K(235968K), 0.0120651 secs] 209792K->14148K(2070976K), 0.0127173 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 98511K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262cd8, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f441280, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.4 <==
[same SLF4J NOP-logger warning and "ulimit -a for user hdfs" output as in the .out.2 file above]
==> /var/log/hadoop/hdfs/gc.log-201810291313 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184374180k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:13:31.118+0200: 1.259: [GC (Allocation Failure) 2018-10-29T13:13:31.119+0200: 1.259: [ParNew: 209792K->14143K(235968K), 0.0125834 secs] 209792K->14143K(2070976K), 0.0127867 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
Heap
par new generation total 235968K, used 96409K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056878, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43fd18, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291440 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184353096k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T14:40:50.424+0200: 1.274: [GC (Allocation Failure) 2018-10-29T14:40:50.424+0200: 1.274: [ParNew: 209792K->14147K(235968K), 0.0122114 secs] 209792K->14147K(2070976K), 0.0124318 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 98510K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262d40, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f440d28, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21420K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291521 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184339408k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T15:21:20.693+0200: 1.242: [GC (Allocation Failure) 2018-10-29T15:21:20.693+0200: 1.243: [ParNew: 209792K->14141K(235968K), 0.0123566 secs] 209792K->14141K(2070976K), 0.0132286 secs] [Times: user=0.04 sys=0.02, real=0.01 secs]
Heap
par new generation total 235968K, used 98504K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 40% used [0x0000000080000000, 0x0000000085262de8, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43f418, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21420K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810242222 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(165169792k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-24T22:22:18.570+0200: 1.304: [GC (Allocation Failure) 2018-10-24T22:22:18.570+0200: 1.304: [ParNew: 209792K->24934K(235968K), 0.0872132 secs] 209792K->41320K(2070976K), 0.0873404 secs] [Times: user=0.67 sys=0.02, real=0.09 secs]
2018-10-24T22:23:20.636+0200: 63.370: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 163007K(2070976K), 0.0111701 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
2018-10-24T22:23:20.648+0200: 63.381: [CMS-concurrent-mark-start]
2018-10-24T22:23:20.651+0200: 63.385: [CMS-concurrent-mark: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2018-10-24T22:23:20.651+0200: 63.385: [CMS-concurrent-preclean-start]
2018-10-24T22:23:20.656+0200: 63.390: [CMS-concurrent-preclean: 0.005/0.005 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T22:23:20.656+0200: 63.390: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2018-10-24T22:23:25.736+0200: 68.469: [CMS-concurrent-abortable-preclean: 1.325/5.079 secs] [Times: user=1.33 sys=0.00, real=5.08 secs]
2018-10-24T22:23:25.736+0200: 68.470: [GC (CMS Final Remark) [YG occupancy: 147320 K (235968 K)]2018-10-24T22:23:25.736+0200: 68.470: [Rescan (parallel) , 0.0113383 secs]2018-10-24T22:23:25.747+0200: 68.481: [weak refs processing, 0.0000283 secs]2018-10-24T22:23:25.747+0200: 68.481: [class unloading, 0.0044830 secs]2018-10-24T22:23:25.752+0200: 68.486: [scrub symbol table, 0.0055488 secs]2018-10-24T22:23:25.757+0200: 68.491: [scrub string table, 0.0004413 secs][1 CMS-remark: 16386K(1835008K)] 163706K(2070976K), 0.0229673 secs] [Times: user=0.10 sys=0.00, real=0.02 secs]
2018-10-24T22:23:25.759+0200: 68.493: [CMS-concurrent-sweep-start]
2018-10-24T22:23:25.760+0200: 68.493: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2018-10-24T22:23:25.760+0200: 68.493: [CMS-concurrent-reset-start]
2018-10-24T22:23:25.768+0200: 68.502: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2018-10-24T23:23:19.714+0200: 3662.447: [GC (Allocation Failure) 2018-10-24T23:23:19.714+0200: 3662.448: [ParNew: 234726K->7900K(235968K), 0.0647148 secs] 251112K->40176K(2070976K), 0.0648297 secs] [Times: user=0.38 sys=0.07, real=0.06 secs]
2018-10-25T02:20:20.884+0200: 14283.617: [GC (Allocation Failure) 2018-10-25T02:20:20.884+0200: 14283.617: [ParNew: 217692K->2674K(235968K), 0.0091647 secs] 249968K->34949K(2070976K), 0.0092651 secs] [Times: user=0.06 sys=0.01, real=0.01 secs]
2018-10-25T05:06:22.016+0200: 24244.750: [GC (Allocation Failure) 2018-10-25T05:06:22.016+0200: 24244.750: [ParNew: 212466K->2441K(235968K), 0.0073466 secs] 244741K->34717K(2070976K), 0.0074268 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-25T09:35:23.400+0200: 40386.134: [GC (Allocation Failure) 2018-10-25T09:35:23.400+0200: 40386.134: [ParNew: 212233K->2534K(235968K), 0.0068900 secs] 244509K->34809K(2070976K), 0.0069641 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
Heap
par new generation total 235968K, used 99362K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 46% used [0x0000000080000000, 0x0000000085e8f098, 0x000000008cce0000)
from space 26176K, 9% used [0x000000008e670000, 0x000000008e8e99e8, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 32275K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26578K, capacity 26848K, committed 27224K, reserved 1073152K
class space used 2754K, capacity 2842K, committed 2916K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.3 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.2 <==
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): NameNode still not started
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2082)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getTransactionID(NameNodeRpcServer.java:1229)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.getTransactionId(NamenodeProtocolServerSideTranslatorPB.java:118)
at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12832)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.getTransactionId(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.getTransactionID(NamenodeProtocolTranslatorPB.java:130)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy11.getTransactionID(Unknown Source)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.countUncheckpointedTxns(SecondaryNameNode.java:651)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(SecondaryNameNode.java:659)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
at java.lang.Thread.run(Thread.java:748)
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.1 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out <==
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
at java.lang.Thread.run(Thread.java:748)
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Log not rolled. Name node is in safe mode.
It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1422)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6309)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1247)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:144)
at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12836)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:522)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
at java.lang.Thread.run(Thread.java:748)
==> /var/log/hadoop/hdfs/gc.log-201810251035 <==
2018-10-27T04:12:41.342+0200: 149836.581: [GC (Allocation Failure) 2018-10-27T04:12:41.342+0200: 149836.581: [ParNew: 210525K->605K(235968K), 0.0069253 secs] 244014K->34098K(2070976K), 0.0070102 secs] [Times: user=0.03 sys=0.01, real=0.00 secs]
2018-10-27T05:50:41.987+0200: 155717.225: [GC (Allocation Failure) 2018-10-27T05:50:41.987+0200: 155717.225: [ParNew: 210397K->665K(235968K), 0.0053941 secs] 243890K->34160K(2070976K), 0.0054826 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-27T07:36:42.535+0200: 162077.773: [GC (Allocation Failure) 2018-10-27T07:36:42.535+0200: 162077.773: [ParNew: 210457K->684K(235968K), 0.0060738 secs] 243952K->34179K(2070976K), 0.0061472 secs] [Times: user=0.04 sys=0.01, real=0.01 secs]
2018-10-27T09:38:42.526+0200: 169397.765: [GC (Allocation Failure) 2018-10-27T09:38:42.526+0200: 169397.765: [ParNew: 210476K->654K(235968K), 0.0062458 secs] 243971K->34179K(2070976K), 0.0063217 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-27T10:52:43.534+0200: 173838.772: [GC (Allocation Failure) 2018-10-27T10:52:43.534+0200: 173838.773: [ParNew: 210446K->449K(235968K), 0.0052087 secs] 243971K->34114K(2070976K), 0.0052878 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-27T12:52:44.148+0200: 181039.386: [GC (Allocation Failure) 2018-10-27T12:52:44.148+0200: 181039.386: [ParNew: 210241K->584K(235968K), 0.0064877 secs] 243906K->34249K(2070976K), 0.0065748 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-27T14:53:44.821+0200: 188300.060: [GC (Allocation Failure) 2018-10-27T14:53:44.822+0200: 188300.060: [ParNew: 210376K->765K(235968K), 0.0064116 secs] 244041K->34430K(2070976K), 0.0064888 secs] [Times: user=0.03 sys=0.01, real=0.00 secs]
2018-10-27T16:31:45.355+0200: 194180.593: [GC (Allocation Failure) 2018-10-27T16:31:45.355+0200: 194180.593: [ParNew: 210557K->594K(235968K), 0.0061077 secs] 244222K->34324K(2070976K), 0.0062066 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-27T18:07:45.924+0200: 199941.163: [GC (Allocation Failure) 2018-10-27T18:07:45.924+0200: 199941.163: [ParNew: 210386K->670K(235968K), 0.0047476 secs] 244116K->34409K(2070976K), 0.0048352 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-27T19:46:42.537+0200: 205877.775: [GC (Allocation Failure) 2018-10-27T19:46:42.537+0200: 205877.775: [ParNew: 210462K->707K(235968K), 0.0071229 secs] 244201K->34447K(2070976K), 0.0072222 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
2018-10-27T21:24:42.521+0200: 211757.759: [GC (Allocation Failure) 2018-10-27T21:24:42.521+0200: 211757.759: [ParNew: 210499K->542K(235968K), 0.0057302 secs] 244239K->34349K(2070976K), 0.0058097 secs] [Times: user=0.02 sys=0.01, real=0.01 secs]
2018-10-27T22:38:47.452+0200: 216202.690: [GC (Allocation Failure) 2018-10-27T22:38:47.452+0200: 216202.690: [ParNew: 210334K->617K(235968K), 0.0067979 secs] 244141K->34424K(2070976K), 0.0068865 secs] [Times: user=0.04 sys=0.01, real=0.00 secs]
2018-10-28T00:39:42.520+0200: 223457.759: [GC (Allocation Failure) 2018-10-28T00:39:42.520+0200: 223457.759: [ParNew: 210409K->700K(235968K), 0.0068970 secs] 244216K->34507K(2070976K), 0.0069853 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-28T02:41:58.556+0200: 230793.794: [GC (Allocation Failure) 2018-10-28T02:41:58.556+0200: 230793.794: [ParNew: 210492K->755K(235968K), 0.0065499 secs] 244299K->34564K(2070976K), 0.0066532 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-28T04:36:49.325+0200: 237684.564: [GC (Allocation Failure) 2018-10-28T04:36:49.325+0200: 237684.564: [ParNew: 210547K->759K(235968K), 0.0048834 secs] 244356K->34634K(2070976K), 0.0049761 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-28T06:36:50.000+0200: 244885.238: [GC (Allocation Failure) 2018-10-28T06:36:50.000+0200: 244885.238: [ParNew: 210551K->713K(235968K), 0.0070871 secs] 244426K->34589K(2070976K), 0.0071777 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
2018-10-28T08:39:42.527+0200: 252257.765: [GC (Allocation Failure) 2018-10-28T08:39:42.527+0200: 252257.765: [ParNew: 210505K->751K(235968K), 0.0059436 secs] 244381K->34627K(2070976K), 0.0060241 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-28T10:36:51.347+0200: 259286.585: [GC (Allocation Failure) 2018-10-28T10:36:51.347+0200: 259286.585: [ParNew: 210543K->662K(235968K), 0.0059278 secs] 244419K->34601K(2070976K), 0.0060151 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-28T12:12:51.913+0200: 265047.151: [GC (Allocation Failure) 2018-10-28T12:12:51.913+0200: 265047.152: [ParNew: 210454K->648K(235968K), 0.0062106 secs] 244393K->34591K(2070976K), 0.0062920 secs] [Times: user=0.04 sys=0.00, real=0.00 secs]
2018-10-28T14:14:52.551+0200: 272367.789: [GC (Allocation Failure) 2018-10-28T14:14:52.551+0200: 272367.789: [ParNew: 210440K->773K(235968K), 0.0048926 secs] 244383K->34716K(2070976K), 0.0049833 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-28T15:52:57.529+0200: 278252.768: [GC (Allocation Failure) 2018-10-28T15:52:57.529+0200: 278252.768: [ParNew: 210565K->483K(235968K), 0.0061652 secs] 244508K->34573K(2070976K), 0.0063145 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-28T17:31:53.681+0200: 284188.920: [GC (Allocation Failure) 2018-10-28T17:31:53.682+0200: 284188.920: [ParNew: 210275K->535K(235968K), 0.0051891 secs] 244365K->34636K(2070976K), 0.0052850 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-28T19:09:54.206+0200: 290069.445: [GC (Allocation Failure) 2018-10-28T19:09:54.206+0200: 290069.445: [ParNew: 210327K->560K(235968K), 0.0045909 secs] 244428K->34662K(2070976K), 0.0046698 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2018-10-28T21:10:54.829+0200: 297330.067: [GC (Allocation Failure) 2018-10-28T21:10:54.829+0200: 297330.067: [ParNew: 210352K->500K(235968K), 0.0055804 secs] 244454K->34602K(2070976K), 0.0056861 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2018-10-28T22:36:55.350+0200: 302490.589: [GC (Allocation Failure) 2018-10-28T22:36:55.350+0200: 302490.589: [ParNew: 210292K->532K(235968K), 0.0048859 secs] 244394K->34675K(2070976K), 0.0049635 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-10-29T00:21:55.991+0200: 308791.229: [GC (Allocation Failure) 2018-10-29T00:21:55.991+0200: 308791.229: [ParNew: 210324K->589K(235968K), 0.0054767 secs] 244467K->34731K(2070976K), 0.0055682 secs] [Times: user=0.03 sys=0.01, real=0.00 secs]
2018-10-29T02:23:56.643+0200: 316111.881: [GC (Allocation Failure) 2018-10-29T02:23:56.643+0200: 316111.881: [ParNew: 210381K->585K(235968K), 0.0061674 secs] 244523K->34728K(2070976K), 0.0062612 secs] [Times: user=0.03 sys=0.01, real=0.00 secs]
2018-10-29T04:07:42.512+0200: 322337.750: [GC (Allocation Failure) 2018-10-29T04:07:42.512+0200: 322337.750: [ParNew: 210377K->503K(235968K), 0.0042246 secs] 244520K->34707K(2070976K), 0.0042889 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2018-10-29T05:27:57.787+0200: 327153.025: [GC (Allocation Failure) 2018-10-29T05:27:57.787+0200: 327153.025: [ParNew: 210295K->495K(235968K), 0.0061271 secs] 244499K->34700K(2070976K), 0.0062285 secs] [Times: user=0.03 sys=0.02, real=0.01 secs]
2018-10-29T07:32:58.509+0200: 334653.748: [GC (Allocation Failure) 2018-10-29T07:32:58.509+0200: 334653.748: [ParNew: 210287K->625K(235968K), 0.0061059 secs] 244492K->34829K(2070976K), 0.0061912 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-29T09:09:59.034+0200: 340474.272: [GC (Allocation Failure) 2018-10-29T09:09:59.034+0200: 340474.272: [ParNew: 210417K->691K(235968K), 0.0064946 secs] 244621K->34917K(2070976K), 0.0066242 secs] [Times: user=0.04 sys=0.00, real=0.00 secs]
2018-10-29T10:45:59.610+0200: 346234.849: [GC (Allocation Failure) 2018-10-29T10:45:59.610+0200: 346234.849: [ParNew: 210483K->598K(235968K), 0.0061810 secs] 244709K->34868K(2070976K), 0.0062636 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 145304K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 68% used [0x0000000080000000, 0x0000000088d50a40, 0x000000008cce0000)
from space 26176K, 2% used [0x000000008cce0000, 0x000000008cd75858, 0x000000008e670000)
to space 26176K, 0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
concurrent mark-sweep generation total 1835008K, used 34270K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26848K, capacity 27070K, committed 27404K, reserved 1073152K
class space used 2769K, capacity 2847K, committed 2864K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291212 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184492108k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 235968K, used 92325K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 44% used [0x0000000080000000, 0x0000000085a294a8, 0x000000008cce0000)
from space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
to space 26176K, 0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 10752K, capacity 10886K, committed 11008K, reserved 1058816K
class space used 1146K, capacity 1221K, committed 1280K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291216 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184465952k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T12:16:35.710+0200: 1.255: [GC (Allocation Failure) 2018-10-29T12:16:35.710+0200: 1.255: [ParNew: 209792K->14139K(235968K), 0.0133716 secs] 209792K->14139K(2070976K), 0.0140281 secs] [Times: user=0.04 sys=0.02, real=0.02 secs]
Heap
par new generation total 235968K, used 96404K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x00000000850565c8, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43ed68, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21417K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291332 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184378432k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-29T13:32:34.683+0200: 1.251: [GC (Allocation Failure) 2018-10-29T13:32:34.683+0200: 1.252: [ParNew: 209792K->14143K(235968K), 0.0138683 secs] 209792K->14143K(2070976K), 0.0145293 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
Heap
par new generation total 235968K, used 96408K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 39% used [0x0000000080000000, 0x0000000085056510, 0x000000008cce0000)
from space 26176K, 54% used [0x000000008e670000, 0x000000008f43fd30, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
class space used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out.3 <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out.2 <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out.1 <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-omiprihdp03ap.mufep.net.out.5 <==
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Could not get the namenode ID of this node. You may run zkfc on the node other than namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:136)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:187)
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768541
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Command failed after 1 tries
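For anyone hitting the same thing: ZKFC refuses to start unless the local host resolves to one of the NameNodes configured for the HA nameservice, so the first check is whether this host's FQDN actually appears in the HA settings. A minimal sketch of that check (the nameservice name "mycluster" and the NameNode ID "nn1" are placeholders for whatever your hdfs-site.xml defines):

# FQDN the ZKFC will try to match against the NameNode list
hostname -f

# NameNode hosts as HDFS sees them - this host should be listed
hdfs getconf -namenodes

# HA settings the match is based on (nameservice/ID below are placeholders)
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn1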
Labels: Apache Hadoop
10-29-2018
02:29 PM
Hi Geoffrey - I reinstalled Ambari and HDFS and that fixed the issue. Thank you!
10-23-2018
05:31 AM
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-<fqdn>.out
/usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh: line 171: /var/run/hadoop/hadoop/hadoop-hadoop-namenode.pid: No such file or directory
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[hadoop@omiprihdp02ap ~]$
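In case it helps anyone else: the "No such file or directory" for the pid file above usually just means the run directory was never created (or was cleaned out of /var/run on reboot). A hedged sketch of recreating it, with the path taken from the error message and the owner assumed to be the "hadoop" service user shown in the prompt:

# Recreate the pid directory hadoop-daemon.sh expects, then retry the start
sudo mkdir -p /var/run/hadoop/hadoop
sudo chown -R hadoop:hadoop /var/run/hadoop/hadoop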
10-23-2018
04:06 AM
Thanks Geoffrey. I increased the memory, but I still get the following error when trying to start the services: Connection failed to http:IP:8042 (<urlopen error [Errno 111] Connection refused>)
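Note that Errno 111 means nothing is listening on the port at all (8042 is the YARN NodeManager web UI), so before suspecting the network it is worth confirming whether the process is even up. A quick sketch (the <host> placeholder stands in for the node's FQDN):

# Is anything listening on the NodeManager web UI port on this node?
ss -ltn | grep 8042

# If it is listening, the NodeManager REST endpoint should answer
curl -sS http://<host>:8042/ws/v1/node/info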
10-21-2018
10:08 PM
Please advise how I can get this started. 6 nodes in cluster: 1 x Edge, 2 x Name, 3 x Data.
stderr: /var/lib/ambari-agent/data/errors-125.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 348, in <module>
NameNode().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 90, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp02ap.mufep.net.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
stdout: /var/lib/ambari-agent/data/output-125.txt
2018-10-21 10:07:42,380 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-21 10:07:42,393 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-21 10:07:42,519 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-21 10:07:42,523 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-21 10:07:42,524 - Group['hdfs'] {}
2018-10-21 10:07:42,525 - Group['hadoop'] {}
2018-10-21 10:07:42,525 - Group['users'] {}
2018-10-21 10:07:42,525 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-21 10:07:42,526 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-21 10:07:42,527 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-21 10:07:42,527 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-10-21 10:07:42,528 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-21 10:07:42,528 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-21 10:07:42,529 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-21 10:07:42,530 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-10-21 10:07:42,536 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-10-21 10:07:42,537 - Group['hdfs'] {}
2018-10-21 10:07:42,537 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-10-21 10:07:42,538 - FS Type:
2018-10-21 10:07:42,538 - Directory['/etc/hadoop'] {'mode': 0755}
2018-10-21 10:07:42,550 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-21 10:07:42,550 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-10-21 10:07:42,563 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-10-21 10:07:42,573 - Skipping Execute[('setenforce', '0')] due to not_if
2018-10-21 10:07:42,574 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-10-21 10:07:42,576 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-10-21 10:07:42,576 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-10-21 10:07:42,580 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-10-21 10:07:42,582 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-10-21 10:07:42,587 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-10-21 10:07:42,594 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-21 10:07:42,595 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-10-21 10:07:42,595 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-10-21 10:07:42,599 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-10-21 10:07:42,603 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-10-21 10:07:42,834 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-21 10:07:42,834 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-21 10:07:42,851 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-21 10:07:42,863 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-10-21 10:07:42,867 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-10-21 10:07:42,867 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-21 10:07:42,874 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml
2018-10-21 10:07:42,874 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,881 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-21 10:07:42,886 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml
2018-10-21 10:07:42,886 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,891 - Directory['/usr/hdp/2.6.5.0-292/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-10-21 10:07:42,891 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2018-10-21 10:07:42,897 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml
2018-10-21 10:07:42,897 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,901 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-21 10:07:42,907 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml
2018-10-21 10:07:42,907 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,912 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2018-10-21 10:07:42,918 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml
2018-10-21 10:07:42,918 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,950 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2018-10-21 10:07:42,956 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml
2018-10-21 10:07:42,956 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-10-21 10:07:42,971 - Writing File['/usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml'] because contents don't match
2018-10-21 10:07:42,972 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2018-10-21 10:07:42,972 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-21 10:07:42,977 - Directory['/grid/0/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-10-21 10:07:42,978 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2018-10-21 10:07:42,980 - Called service start with upgrade_type: None
2018-10-21 10:07:42,980 - Ranger Hdfs plugin is not enabled
2018-10-21 10:07:42,981 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2018-10-21 10:07:42,982 - Writing File['/etc/hadoop/conf/dfs.exclude'] because it doesn't exist
2018-10-21 10:07:42,982 - Changing owner for /etc/hadoop/conf/dfs.exclude from 0 to hdfs
2018-10-21 10:07:42,982 - Changing group for /etc/hadoop/conf/dfs.exclude from 0 to hadoop
2018-10-21 10:07:42,982 - /grid/0/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2018-10-21 10:07:42,982 - Directory['/grid/0/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2018-10-21 10:07:42,982 - Options for start command are:
2018-10-21 10:07:42,983 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2018-10-21 10:07:42,983 - Changing owner for /var/run/hadoop from 0 to hdfs
2018-10-21 10:07:42,983 - Changing group for /var/run/hadoop from 0 to hadoop
2018-10-21 10:07:42,983 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-21 10:07:42,983 - Creating directory Directory['/var/run/hadoop/hdfs'] since it doesn't exist.
2018-10-21 10:07:42,983 - Changing owner for /var/run/hadoop/hdfs from 0 to hdfs
2018-10-21 10:07:42,983 - Changing group for /var/run/hadoop/hdfs from 0 to hadoop
2018-10-21 10:07:42,983 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-21 10:07:42,983 - Creating directory Directory['/var/log/hadoop/hdfs'] since it doesn't exist.
2018-10-21 10:07:43,001 - Changing owner for /var/log/hadoop/hdfs from 0 to hdfs
2018-10-21 10:07:43,002 - Changing group for /var/log/hadoop/hdfs from 0 to hadoop
2018-10-21 10:07:43,002 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-10-21 10:07:43,008 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.5.0-292/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-10-21 10:07:47,161 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp02ap.mufep.net.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 768540
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810211007 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct 9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551312k(193685116k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2018-10-21T10:07:44.027+0200: 0.845: [GC (Allocation Failure) 2018-10-21T10:07:44.027+0200: 0.845: [ParNew: 104960K->9369K(118016K), 0.0093381 secs] 104960K->9369K(1035520K), 0.0094477 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2018-10-21T10:07:45.458+0200: 2.276: [GC (Allocation Failure) 2018-10-21T10:07:45.458+0200: 2.276: [ParNew: 114329K->7011K(118016K), 0.0160670 secs] 114329K->9901K(1035520K), 0.0161554 secs] [Times: user=0.07 sys=0.01, real=0.01 secs]
Heap
par new generation total 118016K, used 63374K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 53% used [0x00000000c0000000, 0x00000000c370acc8, 0x00000000c6680000)
from space 13056K, 53% used [0x00000000c6680000, 0x00000000c6d58e20, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2890K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 23023K, capacity 23310K, committed 23544K, reserved 1071104K
class space used 2612K, capacity 2689K, committed 2764K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp02ap.mufep.net.log <==
2018-10-21 10:07:45,785 INFO namenode.FSEditLog (JournalSet.java:selectInputStreams(274)) - Skipping jas JournalAndStream(mgr=FileJournalManager(root=/grid/0/hadoop/hdfs/namenode), stream=null) since it's disabled
2018-10-21 10:07:45,785 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(726)) - Encountered exception loading fsimage
java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 90719 but unable to find any edit logs containing txid 90719
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1660)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1618)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:661)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
2018-10-21 10:07:45,787 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@omiprihdp02ap.mufep.net:50070
2018-10-21 10:07:45,888 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2018-10-21 10:07:45,889 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2018-10-21 10:07:45,890 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2018-10-21 10:07:45,890 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2018-10-21 10:07:45,890 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 90719 but unable to find any edit logs containing txid 90719
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1660)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1618)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:661)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
2018-10-21 10:07:45,891 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-10-21 10:07:45,893 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at omiprihdp02ap.mufep.net/10.6.7.22
************************************************************/
2018-10-21 10:07:45,893 INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
Command failed after 1 tries
When trying to start anything (iptables, SELinux etc. have been disabled) I get: Connection failed to http:IP:8042 (<urlopen error [Errno 111] Connection refused>). I get this error for many ports across the various hosts.
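For reference, the actual failure above is the "Gap in transactions" error: the NameNode cannot find an edit log containing txid 90719, so it exits rather than load an inconsistent namespace. If no intact copy of the missing edits exists in another name/edits directory, the last resort is NameNode recovery mode, which can skip the unreadable range at the cost of losing those transactions. A hedged sketch, assuming the metadata directory from the log above, to be run only after backing it up:

# Back up the NameNode metadata first (directory taken from the log above)
sudo cp -a /grid/0/hadoop/hdfs/namenode /grid/0/hadoop/hdfs/namenode.bak

# Recovery mode walks the edit logs interactively and can skip the gap
# (answering its prompts may discard transactions - data loss risk)
sudo -u hdfs hdfs namenode -recover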
Labels: Hortonworks Data Platform (HDP)
11-01-2017
07:08 AM
Labels: Cloudera DataFlow (CDF)