Member since: 09-28-2017
Posts: 9
Kudos Received: 0
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2307 | 11-24-2017 04:42 AM |
11-24-2017 04:42 AM
@Jay Kumar SenSharma Thanks for your help. It turned out that I had to remove the proxy configuration and restart all services for the change to take effect, rather than restarting just the HDFS service. After that, the HDFS read operation started working 🙂
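Since the fix was to take the proxy out of the request path, here is a minimal sketch of how the same effect can be achieved at the client level. It assumes the python `hdfs` package from the traceback below (which is built on `requests`); the URL, user, and file path are placeholders for illustration, not values from this thread.

```python
# Sketch: make the python hdfs client ignore http_proxy/https_proxy
# environment variables so WebHDFS requests go directly to the NameNode.
import requests
from hdfs import InsecureClient

session = requests.Session()
session.trust_env = False  # do not pick up proxy settings from the environment

# Hypothetical endpoint, user, and path for illustration only.
client = InsecureClient('http://node1.mydomain:50070', user='hdfs',
                        session=session)
with client.read('/tmp/sample.txt') as reader:
    print(reader.read())
```

Restarting all services likely mattered because the daemons only re-read such environment settings at start-up.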
11-24-2017 04:28 AM
@Jay Kumar SenSharma Thanks for your reply. Yes, I have entries in /etc/hosts. All of this was working before I restarted all of the Hortonworks services.
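As a quick sanity check that the /etc/hosts entries are actually effective on the machine running the pyspark script, a resolution probe like the following can help (the hostname is the one from this thread):

```python
# Verify that "node1.mydomain" resolves locally (e.g., via /etc/hosts).
import socket

try:
    print(socket.gethostbyname('node1.mydomain'))
except socket.gaierror as exc:
    print('local lookup failed:', exc)
```

Note that when traffic is routed through a proxy, the proxy performs its own DNS lookup, so this check can succeed locally while the proxy still reports dns_unresolved_hostname — which matches the eventual resolution above.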
11-24-2017 04:17 AM
@Jay Kumar SenSharma, requesting your help on this issue.
11-24-2017 04:17 AM
Hi @Jay Kumar SenSharma, I'm getting the following error when trying to access a file on HDFS. I am able to ping "node1.mydomain" from the other machine where this pyspark script is running.
File "/opt/<mysoftware>/depLibs/usr/local/lib/python2.7/site-packages/hdfs/client.py", line 44, in _on_error
raise HdfsError(message)
HdfsError: HTML error page with the following message:
Network Error (dns_unresolved_hostname)
Your requested host "node1.mydomain" could not be resolved by DNS.
For assistance, contact your network support team.
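The text above is a proxy's error page being surfaced verbatim by the client. A hypothetical reproduction of that failure mode, showing where the message ends up, might look like this (endpoint and path are placeholders):

```python
# Sketch: the hdfs client raises HdfsError carrying the body of the
# error response it received, e.g. a proxy's HTML error page.
from hdfs import InsecureClient
from hdfs.util import HdfsError

client = InsecureClient('http://node1.mydomain:50070', user='hdfs')
try:
    client.status('/')
except HdfsError as err:
    # With a misconfigured proxy, err carries the proxy's page, e.g.
    # "Network Error (dns_unresolved_hostname) ...".
    print(err)
```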
Labels:
- Apache Hadoop
09-28-2017 06:14 AM
@Jay Kumar SenSharma: Thanks for your help. After setting the permissions as you suggested, we are now waiting for the namenode to come up: '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://mach.openstacklocal:8020 -safemode get | grep 'Safe mode is OFF'' returned 1 (i.e., the NameNode is still in safe mode). Learned that this wait is normal during namenode start-up, as described in: Name node start up operations. Thanks, KD
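A small sketch of the wait implied here: poll the quoted dfsadmin command until safe mode reports OFF. The binary path and fs URI are the ones from this post; adjust for your cluster.

```python
# Poll `hdfs dfsadmin -safemode get` until the NameNode leaves safe mode.
import subprocess
import time

CMD = ('/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin '
       '-fs hdfs://mach.openstacklocal:8020 -safemode get')

while 'Safe mode is OFF' not in subprocess.check_output(CMD.split()).decode():
    time.sleep(10)  # block reports and edit-log replay can take a while
```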
09-28-2017 05:28 AM
Hi All,
We are trying to bring up HDP on 3 nodes. After installation, we found that the NameNode is failing to start. Following is the log:
stderr: /var/lib/ambari-agent/data/errors-362.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 371, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 104, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 275, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out
stdout: /var/lib/ambari-agent/data/output-362.txt
2017-09-28 01:02:27,532 - Stack Feature Version Info: Cluster Stack=2.6, Cluster Current Version=2.6.2.0-205, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2017-09-28 01:02:27,554 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-09-28 01:02:27,730 - Stack Feature Version Info: Cluster Stack=2.6, Cluster Current Version=2.6.2.0-205, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2017-09-28 01:02:27,737 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-09-28 01:02:27,738 - Group['hadoop'] {}
2017-09-28 01:02:27,739 - Group['users'] {}
2017-09-28 01:02:27,739 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,740 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-09-28 01:02:27,755 - call returned (0, '1001')
2017-09-28 01:02:27,756 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-09-28 01:02:27,758 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,760 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-09-28 01:02:27,774 - call returned (0, '1002')
2017-09-28 01:02:27,775 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-09-28 01:02:27,777 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,779 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-09-28 01:02:27,794 - call returned (0, '1003')
2017-09-28 01:02:27,794 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-09-28 01:02:27,796 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-09-28 01:02:27,798 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,800 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-09-28 01:02:27,815 - call returned (0, '1005')
2017-09-28 01:02:27,816 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-09-28 01:02:27,817 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,819 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-09-28 01:02:27,835 - call returned (0, '1006')
2017-09-28 01:02:27,835 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1006}
2017-09-28 01:02:27,837 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,839 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-09-28 01:02:27,855 - call returned (0, '1007')
2017-09-28 01:02:27,855 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1007}
2017-09-28 01:02:27,857 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,859 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-09-28 01:02:27,875 - call returned (0, '1008')
2017-09-28 01:02:27,876 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-09-28 01:02:27,878 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,880 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-09-28 01:02:27,896 - call returned (0, '1009')
2017-09-28 01:02:27,896 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-09-28 01:02:27,898 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-09-28 01:02:27,901 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-09-28 01:02:27,911 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-09-28 01:02:27,912 - Group['hdfs'] {}
2017-09-28 01:02:27,913 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-09-28 01:02:27,914 - FS Type:
2017-09-28 01:02:27,914 - Directory['/etc/hadoop'] {'mode': 0755}
2017-09-28 01:02:27,937 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-09-28 01:02:27,938 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-09-28 01:02:27,961 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-09-28 01:02:27,980 - Skipping Execute[('setenforce', '0')] due to only_if
2017-09-28 01:02:27,981 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-09-28 01:02:27,986 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-09-28 01:02:27,986 - Changing owner for /var/run/hadoop from 1006 to root
2017-09-28 01:02:27,986 - Changing group for /var/run/hadoop from 1001 to root
2017-09-28 01:02:27,987 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-09-28 01:02:27,995 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-09-28 01:02:27,999 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-09-28 01:02:28,006 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-09-28 01:02:28,018 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-09-28 01:02:28,018 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-09-28 01:02:28,020 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-09-28 01:02:28,025 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-09-28 01:02:28,032 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-09-28 01:02:28,262 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-09-28 01:02:28,263 - Stack Feature Version Info: Cluster Stack=2.6, Cluster Current Version=2.6.2.0-205, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2017-09-28 01:02:28,284 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-09-28 01:02:28,296 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-09-28 01:02:28,342 - checked_call returned (0, '2.6.2.0-205', '')
2017-09-28 01:02:28,355 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-09-28 01:02:28,359 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-09-28 01:02:28,360 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-09-28 01:02:28,368 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-09-28 01:02:28,368 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,376 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-09-28 01:02:28,382 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-09-28 01:02:28,383 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,389 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-09-28 01:02:28,389 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-09-28 01:02:28,396 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-09-28 01:02:28,396 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,402 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-09-28 01:02:28,413 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-09-28 01:02:28,414 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,422 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-09-28 01:02:28,430 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-09-28 01:02:28,430 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,481 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-09-28 01:02:28,490 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-09-28 01:02:28,490 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-09-28 01:02:28,513 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-09-28 01:02:28,518 - Directory['/mach/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-09-28 01:02:28,518 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2017-09-28 01:02:28,522 - Called service start with upgrade_type: None
2017-09-28 01:02:28,522 - Ranger Hdfs plugin is not enabled
2017-09-28 01:02:28,523 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2017-09-28 01:02:28,524 - /mach/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2017-09-28 01:02:28,524 - Directory['/mach/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2017-09-28 01:02:28,525 - Options for start command are:
2017-09-28 01:02:28,525 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-09-28 01:02:28,525 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-09-28 01:02:28,526 - Changing group for /var/run/hadoop from 0 to hadoop
2017-09-28 01:02:28,526 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-09-28 01:02:28,526 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-09-28 01:02:28,527 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-09-28 01:02:28,552 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2017-09-28 01:02:28,552 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-09-28 01:02:32,767 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201709270432 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(7527208k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T04:32:49.438-0400: 2.228: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 351235K(4141888K), 0.1695419 secs] [Times: user=0.17 sys=0.00, real=0.17 secs]
2017-09-27T04:32:49.608-0400: 2.398: [CMS-concurrent-mark-start]
2017-09-27T04:32:49.609-0400: 2.399: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-09-27T04:32:49.609-0400: 2.399: [CMS-concurrent-preclean-start]
2017-09-27T04:32:49.621-0400: 2.411: [CMS-concurrent-preclean: 0.012/0.012 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2017-09-27T04:32:49.621-0400: 2.411: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 406178K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 96% used [0x00000006c0000000, 0x00000006d8ca8bc0, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23067K, capacity 23416K, committed 23620K, reserved 1071104K
class space used 2761K, capacity 2888K, committed 2916K, reserved 1048576K
2017-09-27T04:33:05.311-0400: 28.798: [GC (Allocation Failure) 2017-09-27T04:33:05.312-0400: 28.798: [ParNew: 176371K->16290K(184320K), 0.0389000 secs] 176371K->20400K(1028096K), 0.0390959 secs] [Times: user=0.12 sys=0.01, real=0.04 secs]
Heap
par new generation total 184320K, used 58872K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 25% used [0x00000000c0000000, 0x00000000c29958f0, 0x00000000ca000000)
from space 20480K, 79% used [0x00000000ca000000, 0x00000000cafe8808, 0x00000000cb400000)
to space 20480K, 0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
concurrent mark-sweep generation total 843776K, used 4109K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 29236K, capacity 29578K, committed 29868K, reserved 1075200K
class space used 3537K, capacity 3668K, committed 3756K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.log <==
2017-09-28 01:01:54,477 INFO ipc.Server (Server.java:run(1064)) - IPC Server Responder: starting
2017-09-28 01:01:54,478 INFO ipc.Server (Server.java:run(900)) - IPC Server listener on 8010: starting
2017-09-28 01:01:55,623 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:01:56,625 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:01:57,627 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:01:58,628 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:01:59,630 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:00,631 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:01,633 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:02,635 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:03,637 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:04,638 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:05,640 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:06,641 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:07,643 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:08,644 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:09,646 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:10,648 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:11,650 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:12,651 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:13,653 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:14,654 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 19 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:15,656 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 20 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:16,657 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 21 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:17,659 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 22 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:18,663 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 23 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:19,664 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 24 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:20,666 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 25 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:21,668 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 26 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:22,671 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 27 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:23,673 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 28 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:24,674 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 29 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:25,676 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 30 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:26,679 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 31 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:27,681 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 32 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:28,682 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 33 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:29,684 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 34 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:30,685 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 35 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:31,688 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 36 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2017-09-28 01:02:32,689 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: mach-vm1.openstacklocal/10.73.122.48:8020. Already tried 37 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.log <==
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:641)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:334)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1046)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:704)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
2017-09-28 01:02:32,451 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@mach-vm1.openstacklocal:50070
2017-09-28 01:02:32,552 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2017-09-28 01:02:32,553 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2017-09-28 01:02:32,555 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2017-09-28 01:02:32,555 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2017-09-28 01:02:32,556 ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.io.FileNotFoundException: /mach/hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:641)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:334)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1046)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:704)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
2017-09-28 01:02:32,558 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-09-28 01:02:32,559 INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2017-09-28 01:02:32,561 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mach-vm1.openstacklocal/10.73.122.48
************************************************************/
==> /var/log/hadoop/hdfs/gc.log-201709271020 <==
2017-09-27T10:20:30.694-0400: 1.787: [GC (Allocation Failure) 2017-09-27T10:20:30.695-0400: 1.787: [ParNew: 163840K->13621K(184320K), 0.0258176 secs] 163840K->13621K(1028096K), 0.0260778 secs] [Times: user=0.07 sys=0.02, real=0.02 secs]
2017-09-27T10:20:32.720-0400: 3.813: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 143243K(1028096K), 0.0167330 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
2017-09-27T10:20:32.737-0400: 3.830: [CMS-concurrent-mark-start]
2017-09-27T10:20:32.745-0400: 3.838: [CMS-concurrent-mark: 0.007/0.007 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-09-27T10:20:32.745-0400: 3.838: [CMS-concurrent-preclean-start]
2017-09-27T10:20:32.747-0400: 3.840: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T10:20:32.747-0400: 3.840: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-09-27T10:20:37.770-0400: 8.863: [CMS-concurrent-abortable-preclean: 1.716/5.022 secs] [Times: user=1.74 sys=0.01, real=5.02 secs]
2017-09-27T10:20:37.771-0400: 8.863: [GC (CMS Final Remark) [YG occupancy: 146520 K (184320 K)]2017-09-27T10:20:37.771-0400: 8.864: [Rescan (parallel) , 0.0290282 secs]2017-09-27T10:20:37.800-0400: 8.893: [weak refs processing, 0.0000448 secs]2017-09-27T10:20:37.800-0400: 8.893: [class unloading, 0.0140837 secs]2017-09-27T10:20:37.814-0400: 8.907: [scrub symbol table, 0.0052491 secs]2017-09-27T10:20:37.819-0400: 8.912: [scrub string table, 0.0007230 secs][1 CMS-remark: 0K(843776K)] 146520K(1028096K), 0.0501840 secs] [Times: user=0.14 sys=0.00, real=0.05 secs]
2017-09-27T10:20:37.821-0400: 8.914: [CMS-concurrent-sweep-start]
2017-09-27T10:20:37.821-0400: 8.914: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T10:20:37.821-0400: 8.914: [CMS-concurrent-reset-start]
2017-09-27T10:20:37.833-0400: 8.926: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2017-09-27T10:21:40.663-0400: 71.756: [GC (Allocation Failure) 2017-09-27T10:21:40.663-0400: 71.756: [ParNew: 177461K->16199K(184320K), 0.0628747 secs] 177461K->20971K(1028096K), 0.0631465 secs] [Times: user=0.15 sys=0.01, real=0.07 secs]
2017-09-27T11:02:01.621-0400: 2492.713: [GC (Allocation Failure) 2017-09-27T11:02:01.621-0400: 2492.714: [ParNew: 180039K->6290K(184320K), 0.0262927 secs] 184811K->17403K(1028096K), 0.0267309 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
2017-09-27T11:46:30.626-0400: 5161.719: [GC (Allocation Failure) 2017-09-27T11:46:30.626-0400: 5161.719: [ParNew: 170130K->1780K(184320K), 0.0173099 secs] 181243K->12893K(1028096K), 0.0176412 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
2017-09-27T12:32:51.061-0400: 7942.154: [GC (Allocation Failure) 2017-09-27T12:32:51.061-0400: 7942.154: [ParNew: 165620K->893K(184320K), 0.0132090 secs] 176733K->12006K(1028096K), 0.0136622 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2017-09-27T13:20:45.615-0400: 10816.708: [GC (Allocation Failure) 2017-09-27T13:20:45.616-0400: 10816.708: [ParNew: 164733K->941K(184320K), 0.0140123 secs] 175846K->12053K(1028096K), 0.0143220 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-09-27T14:09:50.968-0400: 13762.060: [GC (Allocation Failure) 2017-09-27T14:09:50.968-0400: 13762.060: [ParNew: 164781K->1153K(184320K), 0.0207765 secs] 175893K->12265K(1028096K), 0.0211032 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2017-09-27T14:58:50.883-0400: 16701.976: [GC (Allocation Failure) 2017-09-27T14:58:50.883-0400: 16701.976: [ParNew: 164993K->923K(184320K), 0.0160861 secs] 176105K->12035K(1028096K), 0.0164811 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T15:47:18.407-0400: 19609.500: [GC (Allocation Failure) 2017-09-27T15:47:18.408-0400: 19609.501: [ParNew: 164763K->745K(184320K), 0.0172673 secs] 175875K->12123K(1028096K), 0.0180089 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T16:36:51.048-0400: 22582.141: [GC (Allocation Failure) 2017-09-27T16:36:51.049-0400: 22582.141: [ParNew: 164585K->634K(184320K), 0.0177833 secs] 175963K->12029K(1028096K), 0.0186169 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T17:26:05.613-0400: 25536.706: [GC (Allocation Failure) 2017-09-27T17:26:05.614-0400: 25536.707: [ParNew: 164474K->633K(184320K), 0.0154126 secs] 175869K->12042K(1028096K), 0.0161892 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2017-09-27T18:15:21.020-0400: 28492.113: [GC (Allocation Failure) 2017-09-27T18:15:21.021-0400: 28492.114: [ParNew: 164473K->574K(184320K), 0.0167005 secs] 175882K->11991K(1028096K), 0.0176925 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T19:05:50.914-0400: 31522.007: [GC (Allocation Failure) 2017-09-27T19:05:50.914-0400: 31522.007: [ParNew: 164414K->525K(184320K), 0.0203665 secs] 175831K->11950K(1028096K), 0.0206939 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
2017-09-27T19:54:50.881-0400: 34461.973: [GC (Allocation Failure) 2017-09-27T19:54:50.881-0400: 34461.974: [ParNew: 164365K->580K(184320K), 0.0192889 secs] 175790K->12010K(1028096K), 0.0198887 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T20:44:24.576-0400: 37435.669: [GC (Allocation Failure) 2017-09-27T20:44:24.577-0400: 37435.670: [ParNew: 164420K->624K(184320K), 0.0169528 secs] 175850K->12057K(1028096K), 0.0175462 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
2017-09-27T21:34:14.619-0400: 40425.712: [GC (Allocation Failure) 2017-09-27T21:34:14.620-0400: 40425.713: [ParNew: 164464K->591K(184320K), 0.0191735 secs] 175897K->12035K(1028096K), 0.0198932 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T22:23:31.727-0400: 43382.820: [GC (Allocation Failure) 2017-09-27T22:23:31.728-0400: 43382.821: [ParNew: 164431K->504K(184320K), 0.0180237 secs] 175875K->11950K(1028096K), 0.0186665 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T23:13:11.024-0400: 46362.117: [GC (Allocation Failure) 2017-09-27T23:13:11.024-0400: 46362.117: [ParNew: 164344K->524K(184320K), 0.0140669 secs] 175790K->11972K(1028096K), 0.0144095 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2017-09-28T00:02:47.810-0400: 49338.903: [GC (Allocation Failure) 2017-09-28T00:02:47.810-0400: 49338.903: [ParNew: 164364K->499K(184320K), 0.0173043 secs] 175812K->11950K(1028096K), 0.0175921 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-28T00:51:29.070-0400: 52260.163: [GC (Allocation Failure) 2017-09-28T00:51:29.071-0400: 52260.164: [ParNew: 164339K->540K(184320K), 0.0167345 secs] 175790K->11993K(1028096K), 0.0174390 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
Heap
par new generation total 184320K, used 20748K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 12% used [0x00000000c0000000, 0x00000000c13bbca0, 0x00000000ca000000)
from space 20480K, 2% used [0x00000000ca000000, 0x00000000ca0873a0, 0x00000000cb400000)
to space 20480K, 0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
concurrent mark-sweep generation total 843776K, used 11453K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 30835K, capacity 31196K, committed 31624K, reserved 1077248K
class space used 3460K, capacity 3607K, committed 3732K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270433 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(7534916k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T04:33:50.305-0400: 2.224: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 372622K(4141888K), 0.1695862 secs] [Times: user=0.17 sys=0.00, real=0.17 secs]
2017-09-27T04:33:50.475-0400: 2.394: [CMS-concurrent-mark-start]
2017-09-27T04:33:50.476-0400: 2.396: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-09-27T04:33:50.476-0400: 2.396: [CMS-concurrent-preclean-start]
Heap
par new generation total 471872K, used 406178K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 96% used [0x00000006c0000000, 0x00000006d8ca8be0, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23059K, capacity 23416K, committed 23620K, reserved 1071104K
class space used 2761K, capacity 2888K, committed 2916K, reserved 1048576K
2017-09-27T04:33:50.490-0400: 2.410: [CMS-concurrent-preclean: 0.014/0.014 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
==> /var/log/hadoop/hdfs/gc.log-201709271021 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11294904k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T10:21:06.900-0400: 1.372: [GC (Allocation Failure) 2017-09-27T10:21:06.900-0400: 1.372: [ParNew: 104960K->9778K(118016K), 0.0338512 secs] 104960K->9778K(1035520K), 0.0340141 secs] [Times: user=0.10 sys=0.03, real=0.03 secs]
2017-09-27T10:21:07.694-0400: 2.166: [GC (Allocation Failure) 2017-09-27T10:21:07.694-0400: 2.166: [ParNew: 114738K->13055K(118016K), 0.0522625 secs] 114738K->18834K(1035520K), 0.0523679 secs] [Times: user=0.13 sys=0.03, real=0.06 secs]
Heap
par new generation total 118016K, used 45604K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 31% used [0x00000000c0000000, 0x00000000c1fc9048, 0x00000000c6680000)
from space 13056K, 99% used [0x00000000c6680000, 0x00000000c733fff8, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 5778K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 22655K, capacity 22966K, committed 23212K, reserved 1069056K
class space used 2705K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270446 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(13378876k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 471872K, used 409969K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 97% used [0x00000006c0000000, 0x00000006d905c5e8, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23050K, capaci
2017-09-27T04:46:34.949-0400: 6.542: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-09-27T04:46:40.057-0400: 11.649: [CMS-concurrent-abortable-preclean: 1.726/5.107 secs] [Times: user=3.13 sys=0.07, real=5.11 secs]
2017-09-27T04:46:40.057-0400: 11.650: [GC (CMS Final Remark) [YG occupancy: 172176 K (184320 K)]2017-09-27T04:46:40.057-0400: 11.650: [Rescan (parallel) , 0.0233452 secs]2017-09-27T04:46:40.081-0400: 11.674: [weak refs processing, 0.0000453 secs]2017-09-27T04:46:40.081-0400: 11.674: [class unloading, 0.0071402 secs]2017-09-27T04:46:40.088-0400: 11.681: [scrub symbol table, 0.0048309 secs]2017-09-27T04:46:40.093-0400: 11.686: [scrub string table, 0.0007408 secs][1 CMS-remark: 0K(843776K)] 172176K(1028096K), 0.0369713 secs] [Times: user=0.10 sys=0.00, real=0.04 secs]
2017-09-27T04:46:40.095-0400: 11.687: [CMS-concurrent-sweep-start]
2017-09-27T04:46:40.095-0400: 11.687: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T04:46:40.095-0400: 11.687: [CMS-concurrent-reset-start]
2017-09-27T04:46:40.101-0400: 11.693: [CMS-concurrent-reset: 0.006/0.006 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T04:46:50.934-0400: 22.527: [GC (Allocation Failure) 2017-09-27T04:46:50.934-0400: 22.527: [ParNew: 176358K->15995K(184320K), 0.0868449 secs] 176358K->20077K(1028096K), 0.0870878 secs] [Times: user=0.22 sys=0.01, real=0.09 secs]
Heap
par new generation total 184320K, used 176551K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 97% used [0x00000000c0000000, 0x00000000c9ccb000, 0x00000000ca000000)
from space 20480K, 78% used [0x00000000ca000000, 0x00000000caf9ec88, 0x00000000cb400000)
to space 20480K, 0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
concurrent mark-sweep generation total 843776K, used 4082K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 30119K, capacity 30524K, committed 30868K, reserved 1077248K
class space used 3559K, capacity 3703K, committed 3732K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270742 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11312620k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T07:42:14.435-0400: 2.226: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 301294K(4141888K), 0.1079144 secs] [Times: user=0.11 sys=0.00, real=0.10 secs]
2017-09-27T07:42:14.543-0400: 2.334: [CMS-concurrent-mark-start]
2017-09-27T07:42:14.545-0400: 2.336: [CMS-concurrent-mark: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2017-09-27T07:42:14.545-0400: 2.336: [CMS-concurrent-preclean-start]
2017-09-27T07:42:14.559-0400: 2.349: [CMS-concurrent-preclean: 0.014/0.014 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
2017-09-27T07:42:14.559-0400: 2.350: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372110K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b638c0, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22665K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2711K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out.3 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709271013 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11357152k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T10:13:33.772-0400: 2.009: [GC (Allocation Failure) 2017-09-27T10:13:33.772-0400: 2.009: [ParNew: 209792K->14623K(235968K), 0.0257353 secs] 209792K->14623K(2070976K), 0.0258784 secs] [Times: user=0.07 sys=0.03, real=0.03 secs]
Heap
par new generation total 235968K, used 96136K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 38% used [0x0000000080000000, 0x0000000084f9a350, 0x000000008cce0000)
from space 26176K, 55% used [0x000000008e670000, 0x000000008f4b7d40, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 22620K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2702K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270450 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(13270692k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 471872K, used 410481K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 97% used [0x00000006c0000000, 0x00000006d90dc578, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23096K, capacity 23416K, committed 23620K, reserved 1071104K
class space used 2773K, capacity 2888K, committed 2916K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270933 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11677316k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T09:33:56.820-0400: 1.828: [GC (Allocation Failure) 2017-09-27T09:33:56.820-0400: 1.828: [ParNew: 163840K->13625K(184320K), 0.0306343 secs] 163840K->13625K(1028096K), 0.0307554 secs] [Times: user=0.06 sys=0.03, real=0.03 secs]
2017-09-27T09:33:58.856-0400: 3.864: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 144301K(1028096K), 0.0228710 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
2017-09-27T09:33:58.879-0400: 3.887: [CMS-concurrent-mark-start]
2017-09-27T09:33:58.886-0400: 3.894: [CMS-concurrent-mark: 0.007/0.007 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-09-27T09:33:58.886-0400: 3.894: [CMS-concurrent-preclean-start]
2017-09-27T09:33:58.888-0400: 3.897: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T09:33:58.888-0400: 3.897: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-09-27T09:34:04.002-0400: 9.010: [CMS-concurrent-abortable-preclean: 1.766/5.113 secs] [Times: user=1.78 sys=0.00, real=5.11 secs]
2017-09-27T09:34:04.002-0400: 9.010: [GC (CMS Final Remark) [YG occupancy: 147578 K (184320 K)]2017-09-27T09:34:04.002-0400: 9.011: [Rescan (parallel) , 0.0236079 secs]2017-09-27T09:34:04.026-0400: 9.034: [weak refs processing, 0.0000388 secs]2017-09-27T09:34:04.026-0400: 9.034: [class unloading, 0.0091391 secs]2017-09-27T09:34:04.035-0400: 9.043: [scrub symbol table, 0.0045365 secs]2017-09-27T09:34:04.040-0400: 9.048: [scrub string table, 0.0008380 secs][1 CMS-remark: 0K(843776K)] 147578K(1028096K), 0.0391620 secs] [Times: user=0.11 sys=0.00, real=0.04 secs]
2017-09-27T09:34:04.042-0400: 9.050: [CMS-concurrent-sweep-start]
2017-09-27T09:34:04.042-0400: 9.050: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T09:34:04.042-0400: 9.050: [CMS-concurrent-reset-start]
2017-09-27T09:34:04.066-0400: 9.074: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.01 sys=0.02, real=0.03 secs]
2017-09-27T09:35:16.782-0400: 81.790: [GC (Allocation Failure) 2017-09-27T09:35:16.782-0400: 81.791: [ParNew: 177465K->15186K(184320K), 0.0571902 secs] 177465K->19959K(1028096K), 0.0575172 secs] [Times: user=0.14 sys=0.02, real=0.06 secs]
2017-09-27T10:12:35.147-0400: 2320.155: [GC (Allocation Failure) 2017-09-27T10:12:35.147-0400: 2320.155: [ParNew: 179026K->6290K(184320K), 0.0285597 secs] 183799K->17504K(1028096K), 0.0288664 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
Heap
par new generation total 184320K, used 6969K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 0% used [0x00000000c0000000, 0x00000000c00a9e80, 0x00000000ca000000)
from space 20480K, 30% used [0x00000000cb400000, 0x00000000cba24968, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 11213K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 29454K, capacity 29852K, committed 30344K, reserved 1075200K
class space used 3455K, capacity 3607K, committed 3732K, reserved 1048576K
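(A note on the GC logs above, in case it matters: the heap flags are not identical across restarts — I see -XX:MaxHeapSize=4294967296 (4294967296 / 1024^3 = 4 GB), 2147483648 (2 GB) and 1073741824 (1 GB) in different gc.log files, so the heap settings appear to have changed between start attempts.)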
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out.2 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709270505 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(12555364k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 471872K, used 372109K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63688, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22663K, capacity 22966K, committed 23212K, reserved 1069056K
class space used 2707K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out.3 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709270518 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(12409512k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T05:18:58.883-0400: 2.227: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 301294K(4141888K), 0.1081116 secs] [Times: user=0.12 sys=0.00, real=0.11 secs]
2017-09-27T05:18:58.991-0400: 2.335: [CMS-concurrent-mark-start]
2017-09-27T05:18:58.993-0400: 2.337: [CMS-concurrent-mark: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T05:18:58.993-0400: 2.337: [CMS-concurrent-preclean-start]
2017-09-27T05:18:59.006-0400: 2.350: [CMS-concurrent-preclean: 0.013/0.013 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2017-09-27T05:18:59.006-0400: 2.350: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372109K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b637e0, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22629K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2703K, capacity 2823K, committed 2892K, reserved 1048576K
secs] [Times: user=0.11 sys=0.00, real=0.04 secs]
2017-09-27T05:18:58.101-0400: 8.936: [CMS-concurrent-sweep-start]
2017-09-27T05:18:58.101-0400: 8.936: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T05:18:58.101-0400: 8.936: [CMS-concurrent-reset-start]
2017-09-27T05:18:58.117-0400: 8.952: [CMS-concurrent-reset: 0.016/0.016 secs] [Times: user=0.00 sys=0.02, real=0.02 secs]
2017-09-27T05:20:05.988-0400: 76.823: [GC (Allocation Failure) 2017-09-27T05:20:05.988-0400: 76.824: [ParNew: 177463K->16190K(184320K), 0.0763742 secs] 177463K->20994K(1028096K), 0.0768149 secs] [Times: user=0.19 sys=0.05, real=0.08 secs]
2017-09-27T05:58:46.849-0400: 2397.684: [GC (Allocation Failure) 2017-09-27T05:58:46.849-0400: 2397.684: [ParNew: 180030K->6184K(184320K), 0.0357053 secs] 184834K->17484K(1028096K), 0.0360299 secs] [Times: user=0.10 sys=0.00, real=0.04 secs]
2017-09-27T06:43:10.785-0400: 5061.620: [GC (Allocation Failure) 2017-09-27T06:43:10.785-0400: 5061.620: [ParNew: 170024K->1741K(184320K), 0.0134303 secs] 181324K->13042K(1028096K), 0.0137620 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
2017-09-27T07:27:35.794-0400: 7726.629: [GC (Allocation Failure) 2017-09-27T07:27:35.794-0400: 7726.629: [ParNew: 165581K->895K(184320K), 0.0164165 secs] 176882K->12196K(1028096K), 0.0168960 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2017-09-27T08:15:32.971-0400: 10603.807: [GC (Allocation Failure) 2017-09-27T08:15:32.972-0400: 10603.807: [ParNew: 164735K->964K(184320K), 0.0131445 secs] 176036K->12265K(1028096K), 0.0136324 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2017-09-27T09:03:50.949-0400: 13501.784: [GC (Allocation Failure) 2017-09-27T09:03:50.949-0400: 13501.784: [ParNew: 164804K->1090K(184320K), 0.0138953 secs] 176105K->12390K(1028096K), 0.0141712 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
Heap
par new generation total 184320K, used 105555K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 63% used [0x00000000c0000000, 0x00000000c66046a8, 0x00000000ca000000)
from space 20480K, 5% used [0x00000000cb400000, 0x00000000cb5108d0, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 11300K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 30383K, capacity 30754K, committed 31048K, reserved 1077248K
class space used 3459K, capacity 3608K, committed 3656K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out.5 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709270730 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11332988k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T07:30:36.421-0400: 2.228: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 301294K(4141888K), 0.1047432 secs] [Times: user=0.11 sys=0.00, real=0.11 secs]
2017-09-27T07:30:36.526-0400: 2.333: [CMS-concurrent-mark-start]
2017-09-27T07:30:36.527-0400: 2.334: [CMS-concurrent-mark: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-09-27T07:30:36.527-0400: 2.334: [CMS-concurrent-preclean-start]
2017-09-27T07:30:36.538-0400: 2.345: [CMS-concurrent-preclean: 0.011/0.011 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-09-27T07:30:36.538-0400: 2.345: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372109K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63790, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22642K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2701K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out.1 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709280101 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(13494480k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-28T01:01:54.649-0400: 1.424: [GC (Allocation Failure) 2017-09-28T01:01:54.649-0400: 1.424: [ParNew: 104960K->9621K(118016K), 0.0261472 secs] 104960K->9621K(1035520K), 0.0262585 secs] [Times: user=0.05 sys=0.03, real=0.03 secs]
2017-09-28T01:01:56.493-0400: 3.267: [GC (Allocation Failure) 2017-09-28T01:01:56.493-0400: 3.267: [ParNew: 114581K->6843K(118016K), 0.0547873 secs] 114581K->9881K(1035520K), 0.0549042 secs] [Times: user=0.14 sys=0.01, real=0.05 secs]
Heap
par new generation total 118016K, used 66522K [0x00000000c0000000,
2017-09-28T01:01:54.132-0400: 6.616: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-09-28T01:01:59.277-0400: 11.761: [CMS-concurrent-abortable-preclean: 1.741/5.146 secs] [Times: user=3.15 sys=0.08, real=5.14 secs]
2017-09-28T01:01:59.278-0400: 11.762: [GC (CMS Final Remark) [YG occupancy: 172108 K (184320 K)]2017-09-28T01:01:59.278-0400: 11.762: [Rescan (parallel) , 0.0341087 secs]2017-09-28T01:01:59.313-0400: 11.796: [weak refs processing, 0.0000656 secs]2017-09-28T01:01:59.313-0400: 11.796: [class unloading, 0.0119969 secs]2017-09-28T01:01:59.325-0400: 11.808: [scrub symbol table, 0.0057996 secs]2017-09-28T01:01:59.331-0400: 11.814: [scrub string table, 0.0007292 secs][1 CMS-remark: 0K(843776K)] 172108K(1028096K), 0.0536026 secs] [Times: user=0.15 sys=0.00, real=0.06 secs]
2017-09-28T01:01:59.332-0400: 11.816: [CMS-concurrent-sweep-start]
2017-09-28T01:01:59.332-0400: 11.816: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-28T01:01:59.332-0400: 11.816: [CMS-concurrent-reset-start]
2017-09-28T01:01:59.347-0400: 11.831: [CMS-concurrent-reset: 0.015/0.015 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2017-09-28T01:02:02.179-0400: 14.662: [GC (Allocation Failure) 2017-09-28T01:02:02.179-0400: 14.662: [ParNew: 176359K->15072K(184320K), 0.0824440 secs] 176359K->19152K(1028096K), 0.0826446 secs] [Times: user=0.15 sys=0.02, real=0.09 secs]
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out.5 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709270619 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11915392k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T06:19:35.238-0400: 2.222: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 292905K(4141888K), 0.1244366 secs] [Times: user=0.13 sys=0.00, real=0.12 secs]
2017-09-27T06:19:35.363-0400: 2.347: [CMS-concurrent-mark-start]
2017-09-27T06:19:35.365-0400: 2.348: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-09-27T06:19:35.365-0400: 2.348: [CMS-concurrent-preclean-start]
2017-09-27T06:19:35.378-0400: 2.362: [CMS-concurrent-preclean: 0.014/0.014 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
2017-09-27T06:19:35.378-0400: 2.362: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372110K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63820, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23254K, capacity 23552K, committed 23620K, reserved 1071104K
class space used 2772K, capacity 2901K, committed 2944K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270950 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11273128k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T09:50:32.393-0400: 2.230: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 301296K(4141888K), 0.1162730 secs] [Times: user=0.12 sys=0.00, real=0.12 secs]
2017-09-27T09:50:32.509-0400: 2.346: [CMS-concurrent-mark-start]
2017-09-27T09:50:32.510-0400: 2.347: [CMS-concurrent-mark: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-09-27T09:50:32.510-0400: 2.348: [CMS-concurrent-preclean-start]
2017-09-27T09:50:32.518-0400: 2.355: [CMS-concurrent-preclean: 0.007/0.007 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2017-09-27T09:50:32.518-0400: 2.355: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372112K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b640c8, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22656K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2709K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out.2 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709280102 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(13411488k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-28T01:02:30.160-0400: 1.359: [GC (Allocation Failure) 2017-09-28T01:02:30.160-0400: 1.359: [ParNew: 104960K->9637K(118016K), 0.0338023 secs] 104960K->9637K(1035520K), 0.0340231 secs] [Times: user=0.09 sys=0.03, real=0.04 secs]
2017-09-28T01:02:32.011-0400: 3.210: [GC (Allocation Failure) 2017-09-28T01:02:32.011-0400: 3.210: [ParNew: 114597K->6921K(118016K), 0.0492833 secs] 114597K->9959K(1035520K), 0.0493843 secs] [Times: user=0.12 sys=0.02, real=0.04 secs]
Heap
par new generation total 118016K, used 64442K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 54% used [0x00000000c0000000, 0x00000000c382c528, 0x00000000c6680000)
from space 13056K, 53% used [0x00000000c6680000, 0x00000000c6d426b0, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 3037K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 23097K, capacity 23416K, committed 23620K, reserved 1071104K
class space used 2775K, capacity 2888K, committed 2916K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270625 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11908864k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T06:25:47.793-0400: 2.225: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 292905K(4141888K), 0.1266096 secs] [Times: user=0.14 sys=0.00, real=0.13 secs]
2017-09-27T06:25:47.920-0400: 2.352: [CMS-concurrent-mark-start]
2017-09-27T06:25:47.922-0400: 2.354: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-09-27T06:25:47.922-0400: 2.354: [CMS-concurrent-preclean-start]
2017-09-27T06:25:47.933-0400: 2.365: [CMS-concurrent-preclean: 0.011/0.011 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2017-09-27T06:25:47.933-0400: 2.365: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372109K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63628, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22661K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2711K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270930 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11224324k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 471872K, used 372110K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63a50, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22636K, capacity 22966K, committed 23212K, reserved 1069056K
class space used 2706K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709271012 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11626272k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T10:12:48.993-0400: 1.851: [GC (Allocation Failure) 2017-09-27T10:12:48.993-0400: 1.851: [ParNew: 163840K->13624K(184320K), 0.0188793 secs] 163840K->13624K(1028096K), 0.0190444 secs] [Times: user=0.04 sys=0.02, real=0.02 secs]
2017-09-27T10:12:51.014-0400: 3.872: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 152270K(1028096K), 0.0226766 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
2017-09-27T10:12:51.037-0400: 3.895: [CMS-concurrent-mark-start]
2017-09-27T10:12:51.045-0400: 3.903: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-09-27T10:12:51.045-0400: 3.903: [CMS-concurrent-preclean-start]
2017-09-27T10:12:51.051-0400: 3.909: [CMS-concurrent-preclean: 0.006/0.006 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
2017-09-27T10:12:51.051-0400: 3.909: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-09-27T10:12:56.161-0400: 9.019: [CMS-concurrent-abortable-preclean: 1.790/5.110 secs] [Times: user=2.22 sys=0.02, real=5.11 secs]
2017-09-27T10:12:56.163-0400: 9.020: [GC (CMS Final Remark) [YG occupancy: 165387 K (184320 K)]2017-09-27T10:12:56.163-0400: 9.020: [Rescan (parallel) , 0.0319399 secs]2017-09-27T10:12:56.195-0400: 9.052: [weak refs processing, 0.0000433 secs]2017-09-27T10:12:56.195-0400: 9.052: [class unloading, 0.0118175 secs]2017-09-27T10:12:56.207-0400: 9.064: [scrub symbol table, 0.0063686 secs]2017-09-27T10:12:56.213-0400: 9.071: [scrub string table, 0.0009413 secs][1 CMS-remark: 0K(843776K)] 165387K(1028096K), 0.0519178 secs] [Times: user=0.14 sys=0.00, real=0.06 secs]
2017-09-27T10:12:56.215-0400: 9.073: [CMS-concurrent-sweep-start]
2017-09-27T10:12:56.215-0400: 9.073: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-09-27T10:12:56.215-0400: 9.073: [CMS-concurrent-reset-start]
2017-09-27T10:12:56.230-0400: 9.088: [CMS-concurrent-reset: 0.015/0.015 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2017-09-27T10:14:08.956-0400: 81.814: [GC (Allocation Failure) 2017-09-27T10:14:08.956-0400: 81.814: [ParNew: 177464K->16194K(184320K), 0.0614929 secs] 177464K->20998K(1028096K), 0.0617351 secs] [Times: user=0.14 sys=0.03, real=0.06 secs]
Heap
par new generation total 184320K, used 58664K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 25% used [0x00000000c0000000, 0x00000000c2979788, 0x00000000ca000000)
from space 20480K, 79% used [0x00000000ca000000, 0x00000000cafd0a80, 0x00000000cb400000)
to space 20480K, 0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
concurrent mark-sweep generation total 843776K, used 4803K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 28892K, capacity 29270K, committed 29496K, reserved 1075200K
class space used 3454K, capacity 3606K, committed 3708K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.openstacklocal.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201709270635 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11369964k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T06:35:58.348-0400: 2.225: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 292905K(4141888K), 0.1358926 secs] [Times: user=0.14 sys=0.00, real=0.13 secs]
2017-09-27T06:35:58.484-0400: 2.361: [CMS-concurrent-mark-start]
2017-09-27T06:35:58.486-0400: 2.363: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-09-27T06:35:58.486-0400: 2.363: [CMS-concurrent-preclean-start]
2017-09-27T06:35:58.500-0400: 2.377: [CMS-concurrent-preclean: 0.014/0.014 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
2017-09-27T06:35:58.500-0400: 2.377: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 372109K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b63758, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22629K, capacity 22902K, committed 23212K, reserved 1069056K
class space used 2707K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709271015 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11322112k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T10:15:25.761-0400: 2.082: [GC (Allocation Failure) 2017-09-27T10:15:25.761-0400: 2.082: [ParNew: 209792K->14614K(235968K), 0.0332595 secs] 209792K->14614K(2070976K), 0.0333843 secs] [Times: user=0.09 sys=0.05, real=0.03 secs]
Heap
par new generation total 235968K, used 91931K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
eden space 209792K, 36% used [0x0000000080000000, 0x0000000084b81268, 0x000000008cce0000)
from space 26176K, 55% used [0x000000008e670000, 0x000000008f4b5b08, 0x0000000090000000)
to space 26176K, 0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 22644K, capacity 22966K, committed 23212K, reserved 1069056K
class space used 2701K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270931 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11218620k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 471872K, used 372119K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 88% used [0x00000006c0000000, 0x00000006d6b65fa8, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 22645K, capacity 22966K, committed 23212K, reserved 1069056K
class space used 2703K, capacity 2823K, committed 2892K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201709270934 <==
Java HotSpot(TM) 64-Bit Server VM (25.141-b15) for linux-amd64 JRE (1.8.0_141-b15), built on Jul 12 2017 04:21:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16268384k(11416632k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=4294967296 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=536870912 -XX:MaxTenuringThreshold=6 -XX:NewSize=536870912 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-09-27T09:34:05.893-0400: 2.244: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(3670016K)] 301295K(4141888K), 0.1286330 secs] [Times: user=0.13 sys=0.00, real=0.13 secs]
2017-09-27T09:34:06.022-0400: 2.373: [CMS-concurrent-mark-start]
2017-09-27T09:34:06.024-0400: 2.375: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2017-09-27T09:34:06.024-0400: 2.375: [CMS-concurrent-preclean-start]
2017-09-27T09:34:06.035-0400: 2.386: [CMS-concurrent-preclean: 0.011/0.011 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2017-09-27T09:34:06.035-0400: 2.386: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 471872K, used 380501K [0x00000006c0000000, 0x00000006e0000000, 0x00000006e0000000)
eden space 419456K, 90% used [0x00000006c0000000, 0x00000006d7395678, 0x00000006d99a0000)
from space 52416K, 0% used [0x00000006d99a0000, 0x00000006d99a0000, 0x00000006dccd0000)
to space 52416K, 0% used [0x00000006dccd0000, 0x00000006dccd0000, 0x00000006e0000000)
concurrent mark-sweep generation total 3670016K, used 0K [0x00000006e0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 23253K, capacity 23617K, committed 23876K, reserved 1071104K
class space used 2770K, capacity 2902K, committed 3020K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-mach-vm1.openstacklocal.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63413
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Command failed after 1 tries
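For reference: the "==> file <==" headers in the excerpts above are what tail prints when given multiple files, so the dump was produced with something along these lines (exact paths may differ on your nodes):

tail -n 40 /var/log/hadoop/hdfs/*

I can attach any individual log in full if that helps.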
Regards, K
Labels:
Apache Hadoop