Unable to start the additional NameNode after enabling NameNode HA - final step: Finalize HA Setup

(attached screenshot: enable-ha.jpg)

stderr: /var/lib/ambari-agent/data/errors-1206.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 348, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 90, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start namenode'' returned 1. su: warning: cannot change directory to /home/hdfs: No such file or directory
starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
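
The "su: warning: cannot change directory to /home/hdfs" line at the top of stderr is usually harmless on its own: it only means the hdfs service account has no home directory on this host. A minimal sketch of one way to silence it (assuming the conventional /home/hdfs location; adjust to your layout):

  # run as root on the affected host
  mkdir -p /home/hdfs
  chown hdfs:hadoop /home/hdfs   # hdfs's primary group is hadoop per the agent log below

The actual start failure is reported further down, at the end of the stdout capture.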

stdout: /var/lib/ambari-agent/data/output-1206.txt

2018-10-29 14:06:40,705 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 14:06:40,718 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 14:06:40,882 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 14:06:40,887 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 14:06:40,888 - Group['kms'] {}
2018-10-29 14:06:40,889 - Group['livy'] {}
2018-10-29 14:06:40,889 - Group['spark'] {}
2018-10-29 14:06:40,889 - Group['ranger'] {}
2018-10-29 14:06:40,889 - Group['hdfs'] {}
2018-10-29 14:06:40,889 - Group['zeppelin'] {}
2018-10-29 14:06:40,890 - Group['hadoop'] {}
2018-10-29 14:06:40,890 - Group['users'] {}
2018-10-29 14:06:40,890 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,891 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,892 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,893 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,894 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,895 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 14:06:40,895 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,896 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 14:06:40,897 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger'], 'uid': None}
2018-10-29 14:06:40,898 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 14:06:40,899 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-10-29 14:06:40,900 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,900 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,901 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,902 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,903 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-10-29 14:06:40,904 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,905 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,905 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-10-29 14:06:40,906 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,907 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,908 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,909 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,910 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-10-29 14:06:40,910 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 14:06:40,912 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-10-29 14:06:40,918 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-10-29 14:06:40,918 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-10-29 14:06:40,919 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 14:06:40,921 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-29 14:06:40,921 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-10-29 14:06:40,931 - call returned (0, '1030')
2018-10-29 14:06:40,932 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1030'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-10-29 14:06:40,939 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1030'] due to not_if
2018-10-29 14:06:40,939 - Group['hdfs'] {}
2018-10-29 14:06:40,939 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-10-29 14:06:40,940 - FS Type: 
2018-10-29 14:06:40,940 - Directory['/etc/hadoop'] {'mode': 0755}
2018-10-29 14:06:40,953 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 14:06:40,953 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-10-29 14:06:40,972 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-10-29 14:06:40,981 - Skipping Execute[('setenforce', '0')] due to not_if
2018-10-29 14:06:40,981 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-10-29 14:06:40,984 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-10-29 14:06:40,984 - Changing owner for /var/run/hadoop from 1020 to root
2018-10-29 14:06:40,984 - Changing group for /var/run/hadoop from 1000 to root
2018-10-29 14:06:40,984 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-10-29 14:06:40,988 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-10-29 14:06:40,989 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-10-29 14:06:40,994 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-10-29 14:06:41,002 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 14:06:41,002 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-10-29 14:06:41,003 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-10-29 14:06:41,006 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-10-29 14:06:41,010 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-10-29 14:06:41,288 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 14:06:41,289 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 14:06:41,306 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-10-29 14:06:41,318 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-10-29 14:06:41,322 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-10-29 14:06:41,323 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 14:06:41,329 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml
2018-10-29 14:06:41,329 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,336 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 14:06:41,341 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml
2018-10-29 14:06:41,342 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,346 - Directory['/usr/hdp/2.6.5.0-292/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-10-29 14:06:41,346 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 14:06:41,352 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml
2018-10-29 14:06:41,352 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,357 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-10-29 14:06:41,362 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml
2018-10-29 14:06:41,363 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,367 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2018-10-29 14:06:41,373 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml
2018-10-29 14:06:41,373 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,411 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.0-292/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2018-10-29 14:06:41,416 - Generating config: /usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml
2018-10-29 14:06:41,416 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-10-29 14:06:41,439 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2018-10-29 14:06:41,439 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-10-29 14:06:41,444 - Directory['/grid/0/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-10-29 14:06:41,444 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for secure clusters.
2018-10-29 14:06:41,447 - Called service start with upgrade_type: None
2018-10-29 14:06:41,447 - Ranger Hdfs plugin is not enabled
2018-10-29 14:06:41,448 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2018-10-29 14:06:41,449 - Options for start command are: 
2018-10-29 14:06:41,449 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2018-10-29 14:06:41,449 - Changing owner for /var/run/hadoop from 0 to hdfs
2018-10-29 14:06:41,449 - Changing group for /var/run/hadoop from 0 to hadoop
2018-10-29 14:06:41,449 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-29 14:06:41,450 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-10-29 14:06:41,450 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-10-29 14:06:41,472 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2018-10-29 14:06:41,473 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.5.0-292/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-10-29 14:06:45,641 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
su: warning: cannot change directory to /home/hdfs: No such file or directory
==> /var/log/hadoop/hdfs/gc.log-201810240133 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(183554216k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T01:33:05.852+0200: 1.362: [GC (Allocation Failure) 2018-10-24T01:33:05.852+0200: 1.362: [ParNew: 209792K->24212K(235968K), 0.0630501 secs] 209792K->40598K(2070976K), 0.0631829 secs] [Times: user=0.44 sys=0.02, real=0.07 secs] 
2018-10-24T01:34:07.896+0200: 63.406: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 120652K(2070976K), 0.0079488 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-24T01:34:07.904+0200: 63.414: [CMS-concurrent-mark-start]
2018-10-24T01:34:07.908+0200: 63.418: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T01:34:07.908+0200: 63.418: [CMS-concurrent-preclean-start]
2018-10-24T01:34:07.914+0200: 63.424: [CMS-concurrent-preclean: 0.006/0.006 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2018-10-24T01:34:07.914+0200: 63.424: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T01:34:13.010+0200: 68.520: [CMS-concurrent-abortable-preclean: 1.216/5.096 secs] [Times: user=1.22 sys=0.00, real=5.09 secs] 
2018-10-24T01:34:13.010+0200: 68.520: [GC (CMS Final Remark) [YG occupancy: 104965 K (235968 K)]2018-10-24T01:34:13.010+0200: 68.520: [Rescan (parallel) , 0.0072251 secs]2018-10-24T01:34:13.017+0200: 68.527: [weak refs processing, 0.0000252 secs]2018-10-24T01:34:13.017+0200: 68.527: [class unloading, 0.0036285 secs]2018-10-24T01:34:13.021+0200: 68.531: [scrub symbol table, 0.0054746 secs]2018-10-24T01:34:13.026+0200: 68.536: [scrub string table, 0.0003966 secs][1 CMS-remark: 16386K(1835008K)] 121351K(2070976K), 0.0174924 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
2018-10-24T01:34:13.028+0200: 68.538: [CMS-concurrent-sweep-start]
2018-10-24T01:34:13.028+0200: 68.539: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T01:34:13.028+0200: 68.539: [CMS-concurrent-reset-start]
2018-10-24T01:34:13.037+0200: 68.548: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
2018-10-24T03:01:07.137+0200: 5282.647: [GC (Allocation Failure) 2018-10-24T03:01:07.137+0200: 5282.647: [ParNew: 234004K->7440K(235968K), 0.0530361 secs] 250390K->39189K(2070976K), 0.0531219 secs] [Times: user=0.31 sys=0.02, real=0.05 secs] 
2018-10-24T06:12:08.520+0200: 16744.030: [GC (Allocation Failure) 2018-10-24T06:12:08.520+0200: 16744.030: [ParNew: 217232K->2331K(235968K), 0.0068738 secs] 248981K->34081K(2070976K), 0.0069606 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-24T10:23:10.158+0200: 31805.668: [GC (Allocation Failure) 2018-10-24T10:23:10.158+0200: 31805.668: [ParNew: 212123K->1974K(235968K), 0.0066338 secs] 243873K->33724K(2070976K), 0.0067391 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 40729K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  18% used [0x0000000080000000, 0x00000000825d8ab0, 0x000000008cce0000)
  from space 26176K,   7% used [0x000000008cce0000, 0x000000008cecdab0, 0x000000008e670000)
  to   space 26176K,   0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
 concurrent mark-sweep generation total 1835008K, used 31749K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 25564K, capacity 25848K, committed 26160K, reserved 1073152K
  class space    used 2709K, capacity 2810K, committed 2864K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.log <==
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
	at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:522)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
	at java.lang.Thread.run(Thread.java:748)
2018-10-29 11:31:00,115 INFO  namenode.SecondaryNameNode (SecondaryNameNode.java:run(465)) - Image has changed. Downloading updated image from NN.
2018-10-29 11:31:00,115 INFO  namenode.TransferFsImage (TransferFsImage.java:getFileClient(414)) - Opening connection to http://omiprihdp02ap.mufep.net:50070/imagetransfer?getimage=1&txid=191958&storageInfo=-63:732195773:...
2018-10-29 11:31:00,119 INFO  namenode.TransferFsImage (TransferFsImage.java:receiveFile(592)) - Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 138000.00 KB/s. Synchronous (fsync) write to disk of /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000191958 took 0.00s.
2018-10-29 11:31:00,119 INFO  namenode.TransferFsImage (TransferFsImage.java:downloadImageToStorage(116)) - Downloaded file fsimage.ckpt_0000000000000191958 size 142071 bytes.
2018-10-29 11:31:00,120 INFO  namenode.TransferFsImage (TransferFsImage.java:getFileClient(414)) - Opening connection to http://omiprihdp02ap.mufep.net:50070/imagetransfer?getedit=1&startTxId=191959&endTxId=192022&storage...
2018-10-29 11:31:00,122 INFO  namenode.TransferFsImage (TransferFsImage.java:receiveFile(592)) - Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 8000.00 KB/s. Synchronous (fsync) write to disk of /hadoop/hdfs/namesecondary/current/edits_tmp_0000000000000191959-0000000000000192022_0000000000433577912 took 0.00s.
2018-10-29 11:31:00,122 INFO  namenode.TransferFsImage (TransferFsImage.java:downloadEditsToStorage(169)) - Downloaded file edits_tmp_0000000000000191959-0000000000000192022_0000000000433577912 size 0 bytes.
2018-10-29 11:31:00,133 INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(257)) - Loading 1744 INodes.
2018-10-29 11:31:00,140 INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(184)) - Loaded FSImage in 0 seconds.
2018-10-29 11:31:00,140 INFO  namenode.FSImage (FSImage.java:loadFSImage(911)) - Loaded image for txid 191958 from /hadoop/hdfs/namesecondary/current/fsimage_0000000000000191958
2018-10-29 11:31:00,140 INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 3 entries 128 lookups
2018-10-29 11:31:00,141 INFO  namenode.Checkpointer (Checkpointer.java:rollForwardByApplyingLogs(313)) - Checkpointer about to load edits from 1 stream(s).
2018-10-29 11:31:00,141 INFO  namenode.FSImage (FSImage.java:loadEdits(849)) - Reading /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022 expecting start txid #191959
2018-10-29 11:31:00,141 INFO  namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022
2018-10-29 11:31:00,142 INFO  namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file /hadoop/hdfs/namesecondary/current/edits_0000000000000191959-0000000000000192022 of size 8272 edits # 64 loaded in 0 seconds
2018-10-29 11:31:00,142 INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(417)) - Saving image file /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000192022 using no compression
2018-10-29 11:31:00,149 INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(421)) - Image file /hadoop/hdfs/namesecondary/current/fsimage.ckpt_0000000000000192022 of size 141653 bytes saved in 0 seconds .
2018-10-29 11:31:00,151 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 2 images with txid >= 191958
2018-10-29 11:31:00,151 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:purgeImage(225)) - Purging old image FSImageFile(file=/hadoop/hdfs/namesecondary/current/fsimage_0000000000000182244, cpktTxId=0000000000000182244)
2018-10-29 11:31:00,152 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:purgeImage(225)) - Purging old image FSImageFile(file=/hadoop/hdfs/namesecondary/current/fsimage_0000000000000172464, cpktTxId=0000000000000172464)
2018-10-29 11:31:00,157 INFO  namenode.TransferFsImage (TransferFsImage.java:copyFileToStream(395)) - Sending fileName: /hadoop/hdfs/namesecondary/current/fsimage_0000000000000192022, fileSize: 141653. Sent total: 141653 bytes. Size of last segment intended to send: -1 bytes.
2018-10-29 11:31:00,163 INFO  namenode.TransferFsImage (TransferFsImage.java:uploadImageFromStorage(238)) - Uploaded image with txid 192022 to namenode at http://omiprihdp02ap.mufep.net:50070 in 0.008 seconds
2018-10-29 11:31:00,163 WARN  namenode.SecondaryNameNode (SecondaryNameNode.java:doCheckpoint(576)) - Checkpoint done. New Image Size: 141653
2018-10-29 11:52:31,427 ERROR namenode.SecondaryNameNode (LogAdapter.java:error(69)) - RECEIVED SIGNAL 15: SIGTERM
2018-10-29 11:52:31,429 INFO  namenode.SecondaryNameNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down SecondaryNameNode at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/gc.log-201810241115 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(195238760k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T11:15:34.052+0200: 1.648: [GC (Allocation Failure) 2018-10-24T11:15:34.052+0200: 1.648: [ParNew: 209792K->24380K(235968K), 0.0762337 secs] 209792K->40766K(2070976K), 0.0763600 secs] [Times: user=0.54 sys=0.02, real=0.07 secs] 
2018-10-24T11:16:36.116+0200: 63.712: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 80153K(2070976K), 0.0080867 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-24T11:16:36.125+0200: 63.721: [CMS-concurrent-mark-start]
2018-10-24T11:16:36.129+0200: 63.725: [CMS-concurrent-mark: 0.005/0.005 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T11:16:36.129+0200: 63.725: [CMS-concurrent-preclean-start]
2018-10-24T11:16:36.132+0200: 63.728: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 
2018-10-24T11:16:36.132+0200: 63.728: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T11:16:41.173+0200: 68.769: [CMS-concurrent-abortable-preclean: 1.232/5.041 secs] [Times: user=1.23 sys=0.01, real=5.04 secs] 
2018-10-24T11:16:41.173+0200: 68.769: [GC (CMS Final Remark) [YG occupancy: 63767 K (235968 K)]2018-10-24T11:16:41.173+0200: 68.769: [Rescan (parallel) , 0.0077752 secs]2018-10-24T11:16:41.181+0200: 68.777: [weak refs processing, 0.0000230 secs]2018-10-24T11:16:41.181+0200: 68.777: [class unloading, 0.0029923 secs]2018-10-24T11:16:41.184+0200: 68.780: [scrub symbol table, 0.0040177 secs]2018-10-24T11:16:41.188+0200: 68.784: [scrub string table, 0.0003500 secs][1 CMS-remark: 16386K(1835008K)] 80153K(2070976K), 0.0158982 secs] [Times: user=0.07 sys=0.00, real=0.01 secs] 
2018-10-24T11:16:41.189+0200: 68.786: [CMS-concurrent-sweep-start]
2018-10-24T11:16:41.191+0200: 68.787: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 
2018-10-24T11:16:41.191+0200: 68.787: [CMS-concurrent-reset-start]
2018-10-24T11:16:41.199+0200: 68.795: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
Heap
 par new generation   total 235968K, used 107999K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x00000000851a8c30, 0x000000008cce0000)
  from space 26176K,  93% used [0x000000008e670000, 0x000000008fe3f2d0, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21555K, capacity 21808K, committed 22216K, reserved 1069056K
  class space    used 2411K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810241138 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(194548592k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T11:38:02.324+0200: 1.282: [GC (Allocation Failure) 2018-10-24T11:38:02.324+0200: 1.282: [ParNew: 209792K->24379K(235968K), 0.1062387 secs] 209792K->40765K(2070976K), 0.1063616 secs] [Times: user=0.79 sys=0.01, real=0.10 secs] 
2018-10-24T11:39:04.417+0200: 63.375: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 78189K(2070976K), 0.0079674 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-24T11:39:04.425+0200: 63.383: [CMS-concurrent-mark-start]
2018-10-24T11:39:04.430+0200: 63.388: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T11:39:04.430+0200: 63.388: [CMS-concurrent-preclean-start]
2018-10-24T11:39:04.432+0200: 63.390: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 
2018-10-24T11:39:04.432+0200: 63.390: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T11:39:09.526+0200: 68.484: [CMS-concurrent-abortable-preclean: 1.252/5.094 secs] [Times: user=1.25 sys=0.01, real=5.09 secs] 
2018-10-24T11:39:09.526+0200: 68.485: [GC (CMS Final Remark) [YG occupancy: 61803 K (235968 K)]2018-10-24T11:39:09.527+0200: 68.485: [Rescan (parallel) , 0.0073957 secs]2018-10-24T11:39:09.534+0200: 68.492: [weak refs processing, 0.0000243 secs]2018-10-24T11:39:09.534+0200: 68.492: [class unloading, 0.0031692 secs]2018-10-24T11:39:09.537+0200: 68.495: [scrub symbol table, 0.0045382 secs]2018-10-24T11:39:09.542+0200: 68.500: [scrub string table, 0.0004086 secs][1 CMS-remark: 16386K(1835008K)] 78189K(2070976K), 0.0161897 secs] [Times: user=0.07 sys=0.00, real=0.02 secs] 
2018-10-24T11:39:09.543+0200: 68.501: [CMS-concurrent-sweep-start]
2018-10-24T11:39:09.544+0200: 68.502: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T11:39:09.544+0200: 68.502: [CMS-concurrent-reset-start]
2018-10-24T11:39:09.552+0200: 68.510: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
2018-10-24T13:29:03.258+0200: 6662.216: [GC (Allocation Failure) 2018-10-24T13:29:03.258+0200: 6662.216: [ParNew: 234171K->5658K(235968K), 0.0520074 secs] 250557K->37556K(2070976K), 0.0521378 secs] [Times: user=0.32 sys=0.02, real=0.05 secs] 
Heap
 par new generation   total 235968K, used 31930K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  12% used [0x0000000080000000, 0x00000000819a7c18, 0x000000008cce0000)
  from space 26176K,  21% used [0x000000008cce0000, 0x000000008d266bf8, 0x000000008e670000)
  to   space 26176K,   0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
 concurrent mark-sweep generation total 1835008K, used 31897K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21912K, capacity 22192K, committed 22472K, reserved 1069056K
  class space    used 2411K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291230 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184369936k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T12:30:17.616+0200: 1.293: [GC (Allocation Failure) 2018-10-29T12:30:17.617+0200: 1.294: [ParNew: 209792K->14145K(235968K), 0.0123403 secs] 209792K->14145K(2070976K), 0.0130008 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 98508K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  40% used [0x0000000080000000, 0x0000000085262e48, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f440540, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291303 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184368176k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:03:23.130+0200: 1.185: [GC (Allocation Failure) 2018-10-29T13:03:23.130+0200: 1.186: [ParNew: 209792K->14144K(235968K), 0.0145294 secs] 209792K->14144K(2070976K), 0.0146526 secs] [Times: user=0.06 sys=0.01, real=0.02 secs] 
Heap
 par new generation   total 235968K, used 98508K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  40% used [0x0000000080000000, 0x0000000085262df0, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f440228, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21416K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291319 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184368312k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:19:17.693+0200: 1.250: [GC (Allocation Failure) 2018-10-29T13:19:17.693+0200: 1.251: [ParNew: 209792K->14138K(235968K), 0.0131332 secs] 209792K->14138K(2070976K), 0.0137785 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 98501K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  40% used [0x0000000080000000, 0x0000000085262d80, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43ea70, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21425K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291339 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184376140k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:39:49.252+0200: 1.223: [GC (Allocation Failure) 2018-10-29T13:39:49.252+0200: 1.223: [ParNew: 209792K->14144K(235968K), 0.0143870 secs] 209792K->14144K(2070976K), 0.0151843 secs] [Times: user=0.05 sys=0.02, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96410K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x00000000850566c8, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f4403d0, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21422K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241341 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(187112732k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T13:41:25.777+0200: 1.302: [GC (Allocation Failure) 2018-10-24T13:41:25.777+0200: 1.302: [ParNew: 209792K->24580K(235968K), 0.1099037 secs] 209792K->40966K(2070976K), 0.1100410 secs] [Times: user=0.87 sys=0.02, real=0.11 secs] 
2018-10-24T13:42:27.873+0200: 63.398: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84414K(2070976K), 0.0079132 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-24T13:42:27.881+0200: 63.406: [CMS-concurrent-mark-start]
2018-10-24T13:42:27.885+0200: 63.410: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T13:42:27.885+0200: 63.410: [CMS-concurrent-preclean-start]
2018-10-24T13:42:27.888+0200: 63.413: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T13:42:27.888+0200: 63.413: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T13:42:32.970+0200: 68.495: [CMS-concurrent-abortable-preclean: 1.270/5.082 secs] [Times: user=1.27 sys=0.01, real=5.08 secs] 
2018-10-24T13:42:32.970+0200: 68.495: [GC (CMS Final Remark) [YG occupancy: 68028 K (235968 K)]2018-10-24T13:42:32.970+0200: 68.495: [Rescan (parallel) , 0.0077338 secs]2018-10-24T13:42:32.978+0200: 68.503: [weak refs processing, 0.0000556 secs]2018-10-24T13:42:32.978+0200: 68.503: [class unloading, 0.0032177 secs]2018-10-24T13:42:32.981+0200: 68.506: [scrub symbol table, 0.0043939 secs]2018-10-24T13:42:32.986+0200: 68.510: [scrub string table, 0.0003463 secs][1 CMS-remark: 16386K(1835008K)] 84414K(2070976K), 0.0164618 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
2018-10-24T13:42:32.987+0200: 68.512: [CMS-concurrent-sweep-start]
2018-10-24T13:42:32.988+0200: 68.512: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T13:42:32.988+0200: 68.512: [CMS-concurrent-reset-start]
2018-10-24T13:42:32.995+0200: 68.520: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 133373K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  51% used [0x0000000080000000, 0x0000000086a3e7a8, 0x000000008cce0000)
  from space 26176K,  93% used [0x000000008e670000, 0x000000008fe71020, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21630K, capacity 21880K, committed 22140K, reserved 1069056K
  class space    used 2412K, capacity 2488K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291213 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184468500k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T12:13:38.797+0200: 1.264: [GC (Allocation Failure) 2018-10-29T12:13:38.798+0200: 1.264: [ParNew: 209792K->14147K(235968K), 0.0127328 secs] 209792K->14147K(2070976K), 0.0133953 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96412K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056658, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f440d80, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21422K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.5 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810241423 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(173589320k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T14:23:45.451+0200: 1.342: [GC (Allocation Failure) 2018-10-24T14:23:45.451+0200: 1.342: [ParNew: 209792K->24587K(235968K), 0.0642295 secs] 209792K->40973K(2070976K), 0.0643522 secs] [Times: user=0.52 sys=0.02, real=0.06 secs] 
2018-10-24T14:24:47.494+0200: 63.385: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84334K(2070976K), 0.0082104 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-24T14:24:47.502+0200: 63.393: [CMS-concurrent-mark-start]
2018-10-24T14:24:47.505+0200: 63.396: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T14:24:47.505+0200: 63.396: [CMS-concurrent-preclean-start]
2018-10-24T14:24:47.508+0200: 63.399: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T14:24:47.508+0200: 63.399: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T14:24:52.564+0200: 68.455: [CMS-concurrent-abortable-preclean: 1.249/5.056 secs] [Times: user=1.25 sys=0.00, real=5.06 secs] 
2018-10-24T14:24:52.564+0200: 68.455: [GC (CMS Final Remark) [YG occupancy: 67948 K (235968 K)]2018-10-24T14:24:52.564+0200: 68.455: [Rescan (parallel) , 0.0076604 secs]2018-10-24T14:24:52.572+0200: 68.463: [weak refs processing, 0.0000248 secs]2018-10-24T14:24:52.572+0200: 68.463: [class unloading, 0.0028079 secs]2018-10-24T14:24:52.575+0200: 68.466: [scrub symbol table, 0.0036138 secs]2018-10-24T14:24:52.578+0200: 68.469: [scrub string table, 0.0003101 secs][1 CMS-remark: 16386K(1835008K)] 84334K(2070976K), 0.0150859 secs] [Times: user=0.07 sys=0.00, real=0.01 secs] 
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-sweep-start]
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T14:24:52.579+0200: 68.470: [CMS-concurrent-reset-start]
2018-10-24T14:24:52.588+0200: 68.479: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 117116K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  44% used [0x0000000080000000, 0x0000000085a5c380, 0x000000008cce0000)
  from space 26176K,  93% used [0x000000008e670000, 0x000000008fe72e58, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21576K, capacity 21848K, committed 22140K, reserved 1069056K
  class space    used 2405K, capacity 2456K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.5 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.log <==
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
	Number of suppressed write-lock reports: 0
	Longest write-lock held interval: 0
2018-10-29 14:06:43,106 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1302)) - Stopping services started for active state
2018-10-29 14:06:43,106 INFO  namenode.FSNamesystem (FSNamesystem.java:writeUnlock(1689)) - FSNamesystem write lock held for 0 ms via
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1690)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1339)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1760)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:918)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:716)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
	Number of suppressed write-lock reports: 0
	Longest write-lock held interval: 0
2018-10-29 14:06:43,106 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1392)) - Stopping services started for standby state
2018-10-29 14:06:43,106 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.lang.IllegalStateException: Could not determine own NN ID in namespace 'OmiHdpPrdCluster'. Please ensure that this node is one of the machines listed as an NN RPC address, or configure dfs.ha.namenode.id
	at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
	at org.apache.hadoop.hdfs.HAUtil.getNameNodeIdOfOtherNode(HAUtil.java:164)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createBlockTokenSecretManager(BlockManager.java:442)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:334)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:781)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:716)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
2018-10-29 14:06:43,107 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-10-29 14:06:43,108 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at omiprihdp03ap.mufep.net/10.6.7.23
************************************************************/
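
This IllegalStateException above is the actual failure: the NameNode cannot match this host against any of the NN RPC addresses configured for nameservice 'OmiHdpPrdCluster'. A quick sanity check from the failing host (a generic sketch; nn1/nn2 are the IDs Ambari usually assigns, so verify yours with the first command):

# List the NameNode IDs configured for the nameservice, then the RPC address
# behind each ID, and compare them with this host's own FQDN.
hdfs getconf -confKey dfs.ha.namenodes.OmiHdpPrdCluster
hdfs getconf -confKey dfs.namenode.rpc-address.OmiHdpPrdCluster.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.OmiHdpPrdCluster.nn2
hostname -f

If hostname -f does not match the host part of one of those RPC addresses (DNS alias, short name vs. FQDN, stale /etc/hosts entry), either fix the hostname/address mismatch or set dfs.ha.namenode.id explicitly in hdfs-site.xml on this host, as the exception message itself suggests.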
==> /var/log/hadoop/hdfs/gc.log-201810291223 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184366120k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T12:23:08.681+0200: 1.264: [GC (Allocation Failure) 2018-10-29T12:23:08.681+0200: 1.264: [ParNew: 209792K->14139K(235968K), 0.0126677 secs] 209792K->14139K(2070976K), 0.0128856 secs] [Times: user=0.04 sys=0.01, real=0.02 secs] 
Heap
 par new generation   total 235968K, used 100601K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  41% used [0x0000000080000000, 0x000000008546f5c8, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43ee78, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21418K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291325 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184371044k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:25:43.642+0200: 1.242: [GC (Allocation Failure) 2018-10-29T13:25:43.643+0200: 1.243: [ParNew: 209792K->14145K(235968K), 0.0125742 secs] 209792K->14145K(2070976K), 0.0133519 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96411K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056780, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f4405c8, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21419K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291341 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184375248k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:41:15.377+0200: 1.251: [GC (Allocation Failure) 2018-10-29T13:41:15.377+0200: 1.251: [ParNew: 209792K->14131K(235968K), 0.0121842 secs] 209792K->14131K(2070976K), 0.0123911 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96396K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x00000000850564f0, 0x000000008cce0000)
  from space 26176K,  53% used [0x000000008e670000, 0x000000008f43cc20, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241449 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(171667804k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T14:49:15.104+0200: 1.257: [GC (Allocation Failure) 2018-10-24T14:49:15.105+0200: 1.257: [ParNew: 209792K->24576K(235968K), 0.0926388 secs] 209792K->40962K(2070976K), 0.0927928 secs] [Times: user=0.74 sys=0.02, real=0.09 secs] 
2018-10-24T14:50:17.185+0200: 63.337: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84159K(2070976K), 0.0077603 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-24T14:50:17.193+0200: 63.345: [CMS-concurrent-mark-start]
2018-10-24T14:50:17.197+0200: 63.349: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T14:50:17.197+0200: 63.349: [CMS-concurrent-preclean-start]
2018-10-24T14:50:17.199+0200: 63.352: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T14:50:17.199+0200: 63.352: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T14:50:22.275+0200: 68.427: [CMS-concurrent-abortable-preclean: 1.264/5.076 secs] [Times: user=1.26 sys=0.01, real=5.08 secs] 
2018-10-24T14:50:22.276+0200: 68.428: [GC (CMS Final Remark) [YG occupancy: 67773 K (235968 K)]2018-10-24T14:50:22.276+0200: 68.428: [Rescan (parallel) , 0.0076826 secs]2018-10-24T14:50:22.283+0200: 68.436: [weak refs processing, 0.0000249 secs]2018-10-24T14:50:22.283+0200: 68.436: [class unloading, 0.0028875 secs]2018-10-24T14:50:22.286+0200: 68.439: [scrub symbol table, 0.0038950 secs]2018-10-24T14:50:22.290+0200: 68.442: [scrub string table, 0.0003330 secs][1 CMS-remark: 16386K(1835008K)] 84159K(2070976K), 0.0156261 secs] [Times: user=0.07 sys=0.00, real=0.02 secs] 
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-sweep-start]
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T14:50:22.291+0200: 68.444: [CMS-concurrent-reset-start]
2018-10-24T14:50:22.300+0200: 68.452: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
Heap
 par new generation   total 235968K, used 199094K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  83% used [0x0000000080000000, 0x000000008aa6db80, 0x000000008cce0000)
  from space 26176K,  93% used [0x000000008e670000, 0x000000008fe70040, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21750K, capacity 21976K, committed 22216K, reserved 1069056K
  class space    used 2405K, capacity 2456K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291323 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184371504k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:23:56.564+0200: 1.250: [GC (Allocation Failure) 2018-10-29T13:23:56.564+0200: 1.251: [ParNew: 209792K->14156K(235968K), 0.0121555 secs] 209792K->14156K(2070976K), 0.0122764 secs] [Times: user=0.06 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96421K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056608, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f443158, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810241610 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(167802056k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T16:10:41.265+0200: 1.328: [GC (Allocation Failure) 2018-10-24T16:10:41.265+0200: 1.328: [ParNew: 209792K->24590K(235968K), 0.1116302 secs] 209792K->40976K(2070976K), 0.1117583 secs] [Times: user=0.88 sys=0.02, real=0.11 secs] 
2018-10-24T16:11:43.354+0200: 63.417: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 84338K(2070976K), 0.0077304 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-24T16:11:43.362+0200: 63.425: [CMS-concurrent-mark-start]
2018-10-24T16:11:43.366+0200: 63.429: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T16:11:43.366+0200: 63.429: [CMS-concurrent-preclean-start]
2018-10-24T16:11:43.368+0200: 63.431: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T16:11:43.368+0200: 63.431: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T16:11:48.437+0200: 68.500: [CMS-concurrent-abortable-preclean: 1.257/5.069 secs] [Times: user=1.26 sys=0.00, real=5.07 secs] 
2018-10-24T16:11:48.438+0200: 68.501: [GC (CMS Final Remark) [YG occupancy: 67952 K (235968 K)]2018-10-24T16:11:48.438+0200: 68.501: [Rescan (parallel) , 0.0075482 secs]2018-10-24T16:11:48.445+0200: 68.508: [weak refs processing, 0.0000230 secs]2018-10-24T16:11:48.445+0200: 68.508: [class unloading, 0.0031608 secs]2018-10-24T16:11:48.448+0200: 68.511: [scrub symbol table, 0.0040634 secs]2018-10-24T16:11:48.452+0200: 68.515: [scrub string table, 0.0003496 secs][1 CMS-remark: 16386K(1835008K)] 84338K(2070976K), 0.0157823 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
2018-10-24T16:11:48.454+0200: 68.517: [CMS-concurrent-sweep-start]
2018-10-24T16:11:48.455+0200: 68.518: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T16:11:48.455+0200: 68.518: [CMS-concurrent-reset-start]
2018-10-24T16:11:48.463+0200: 68.526: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 145259K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  57% used [0x0000000080000000, 0x00000000875d7378, 0x000000008cce0000)
  from space 26176K,  93% used [0x000000008e670000, 0x000000008fe73bb0, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 16386K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21658K, capacity 21916K, committed 22140K, reserved 1069056K
  class space    used 2405K, capacity 2456K, committed 2508K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291249 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184383344k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T12:49:16.103+0200: 1.239: [GC (Allocation Failure) 2018-10-29T12:49:16.104+0200: 1.239: [ParNew: 209792K->14140K(235968K), 0.0116747 secs] 209792K->14140K(2070976K), 0.0123039 secs] [Times: user=0.06 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 98503K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  40% used [0x0000000080000000, 0x0000000085262d80, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43f0a8, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21420K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291304 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184374808k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:04:04.008+0200: 1.269: [GC (Allocation Failure) 2018-10-29T13:04:04.008+0200: 1.269: [ParNew: 209792K->14150K(235968K), 0.0130799 secs] 209792K->14150K(2070976K), 0.0132757 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 235968K, used 96416K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056940, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f4419d8, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out.4 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810291406 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184367832k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T14:06:42.892+0200: 1.229: [GC (Allocation Failure) 2018-10-29T14:06:42.892+0200: 1.230: [ParNew: 209792K->14150K(235968K), 0.0118410 secs] 209792K->14150K(2070976K), 0.0124932 secs] [Times: user=0.04 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 98504K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  40% used [0x0000000080000000, 0x0000000085260788, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f441b80, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21418K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810241655 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(167319644k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T16:55:52.756+0200: 1.334: [GC (Allocation Failure) 2018-10-24T16:55:52.756+0200: 1.334: [ParNew: 209792K->24951K(235968K), 0.1101571 secs] 209792K->41337K(2070976K), 0.1102825 secs] [Times: user=0.88 sys=0.02, real=0.11 secs] 
2018-10-24T16:56:54.843+0200: 63.421: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 88644K(2070976K), 0.0076252 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-24T16:56:54.851+0200: 63.429: [CMS-concurrent-mark-start]
2018-10-24T16:56:54.855+0200: 63.432: [CMS-concurrent-mark: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T16:56:54.855+0200: 63.432: [CMS-concurrent-preclean-start]
2018-10-24T16:56:54.857+0200: 63.435: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-10-24T16:56:54.857+0200: 63.435: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T16:56:59.948+0200: 68.526: [CMS-concurrent-abortable-preclean: 1.382/5.091 secs] [Times: user=1.38 sys=0.00, real=5.09 secs] 
2018-10-24T16:56:59.948+0200: 68.526: [GC (CMS Final Remark) [YG occupancy: 72257 K (235968 K)]2018-10-24T16:56:59.948+0200: 68.526: [Rescan (parallel) , 0.0077662 secs]2018-10-24T16:56:59.956+0200: 68.534: [weak refs processing, 0.0000287 secs]2018-10-24T16:56:59.956+0200: 68.534: [class unloading, 0.0035862 secs]2018-10-24T16:56:59.960+0200: 68.538: [scrub symbol table, 0.0051221 secs]2018-10-24T16:56:59.965+0200: 68.543: [scrub string table, 0.0004291 secs][1 CMS-remark: 16386K(1835008K)] 88644K(2070976K), 0.0176956 secs] [Times: user=0.07 sys=0.00, real=0.02 secs] 
2018-10-24T16:56:59.966+0200: 68.544: [CMS-concurrent-sweep-start]
2018-10-24T16:56:59.967+0200: 68.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T16:56:59.967+0200: 68.545: [CMS-concurrent-reset-start]
2018-10-24T16:56:59.975+0200: 68.553: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
2018-10-24T17:54:53.950+0200: 3542.527: [GC (Allocation Failure) 2018-10-24T17:54:53.950+0200: 3542.528: [ParNew: 234743K->7941K(235968K), 0.0690527 secs] 251129K->40213K(2070976K), 0.0691659 secs] [Times: user=0.44 sys=0.05, real=0.07 secs] 
2018-10-24T20:53:55.037+0200: 14283.615: [GC (Allocation Failure) 2018-10-24T20:53:55.037+0200: 14283.615: [ParNew: 217733K->2849K(235968K), 0.0067938 secs] 250005K->35121K(2070976K), 0.0068772 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96286K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  44% used [0x0000000080000000, 0x0000000085b3f620, 0x000000008cce0000)
  from space 26176K,  10% used [0x000000008e670000, 0x000000008e9384e8, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 32272K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 26269K, capacity 26592K, committed 26952K, reserved 1073152K
  class space    used 2758K, capacity 2842K, committed 2944K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291313 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184374180k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:13:31.118+0200: 1.259: [GC (Allocation Failure) 2018-10-29T13:13:31.119+0200: 1.259: [ParNew: 209792K->14143K(235968K), 0.0125834 secs] 209792K->14143K(2070976K), 0.0127867 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 235968K, used 96409K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056878, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43fd18, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21423K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-omiprihdp03ap.mufep.net.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201810242222 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(165169792k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-24T22:22:18.570+0200: 1.304: [GC (Allocation Failure) 2018-10-24T22:22:18.570+0200: 1.304: [ParNew: 209792K->24934K(235968K), 0.0872132 secs] 209792K->41320K(2070976K), 0.0873404 secs] [Times: user=0.67 sys=0.02, real=0.09 secs] 
2018-10-24T22:23:20.636+0200: 63.370: [GC (CMS Initial Mark) [1 CMS-initial-mark: 16386K(1835008K)] 163007K(2070976K), 0.0111701 secs] [Times: user=0.07 sys=0.00, real=0.01 secs] 
2018-10-24T22:23:20.648+0200: 63.381: [CMS-concurrent-mark-start]
2018-10-24T22:23:20.651+0200: 63.385: [CMS-concurrent-mark: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2018-10-24T22:23:20.651+0200: 63.385: [CMS-concurrent-preclean-start]
2018-10-24T22:23:20.656+0200: 63.390: [CMS-concurrent-preclean: 0.005/0.005 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T22:23:20.656+0200: 63.390: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-10-24T22:23:25.736+0200: 68.469: [CMS-concurrent-abortable-preclean: 1.325/5.079 secs] [Times: user=1.33 sys=0.00, real=5.08 secs] 
2018-10-24T22:23:25.736+0200: 68.470: [GC (CMS Final Remark) [YG occupancy: 147320 K (235968 K)]2018-10-24T22:23:25.736+0200: 68.470: [Rescan (parallel) , 0.0113383 secs]2018-10-24T22:23:25.747+0200: 68.481: [weak refs processing, 0.0000283 secs]2018-10-24T22:23:25.747+0200: 68.481: [class unloading, 0.0044830 secs]2018-10-24T22:23:25.752+0200: 68.486: [scrub symbol table, 0.0055488 secs]2018-10-24T22:23:25.757+0200: 68.491: [scrub string table, 0.0004413 secs][1 CMS-remark: 16386K(1835008K)] 163706K(2070976K), 0.0229673 secs] [Times: user=0.10 sys=0.00, real=0.02 secs] 
2018-10-24T22:23:25.759+0200: 68.493: [CMS-concurrent-sweep-start]
2018-10-24T22:23:25.760+0200: 68.493: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-10-24T22:23:25.760+0200: 68.493: [CMS-concurrent-reset-start]
2018-10-24T22:23:25.768+0200: 68.502: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs] 
2018-10-24T23:23:19.714+0200: 3662.447: [GC (Allocation Failure) 2018-10-24T23:23:19.714+0200: 3662.448: [ParNew: 234726K->7900K(235968K), 0.0647148 secs] 251112K->40176K(2070976K), 0.0648297 secs] [Times: user=0.38 sys=0.07, real=0.06 secs] 
2018-10-25T02:20:20.884+0200: 14283.617: [GC (Allocation Failure) 2018-10-25T02:20:20.884+0200: 14283.617: [ParNew: 217692K->2674K(235968K), 0.0091647 secs] 249968K->34949K(2070976K), 0.0092651 secs] [Times: user=0.06 sys=0.01, real=0.01 secs] 
2018-10-25T05:06:22.016+0200: 24244.750: [GC (Allocation Failure) 2018-10-25T05:06:22.016+0200: 24244.750: [ParNew: 212466K->2441K(235968K), 0.0073466 secs] 244741K->34717K(2070976K), 0.0074268 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-25T09:35:23.400+0200: 40386.134: [GC (Allocation Failure) 2018-10-25T09:35:23.400+0200: 40386.134: [ParNew: 212233K->2534K(235968K), 0.0068900 secs] 244509K->34809K(2070976K), 0.0069641 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 99362K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  46% used [0x0000000080000000, 0x0000000085e8f098, 0x000000008cce0000)
  from space 26176K,   9% used [0x000000008e670000, 0x000000008e8e99e8, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 32275K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 26578K, capacity 26848K, committed 27224K, reserved 1073152K
  class space    used 2754K, capacity 2842K, committed 2916K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.4 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.3 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.2 <==
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): NameNode still not started
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2082)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getTransactionID(NameNodeRpcServer.java:1229)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.getTransactionId(NamenodeProtocolServerSideTranslatorPB.java:118)
	at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12832)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
	at org.apache.hadoop.ipc.Client.call(Client.java:1498)
	at org.apache.hadoop.ipc.Client.call(Client.java:1398)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy10.getTransactionId(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.getTransactionID(NamenodeProtocolTranslatorPB.java:130)
	at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
	at com.sun.proxy.$Proxy11.getTransactionID(Unknown Source)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.countUncheckpointedTxns(SecondaryNameNode.java:651)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(SecondaryNameNode.java:659)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:403)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
	at java.lang.Thread.run(Thread.java:748)
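Side note: this RetriableException is just the old SecondaryNameNode's checkpoint thread polling a NameNode whose RPC server never finished starting, so it retries forever. To confirm whether a NameNode JVM is actually up on the host (a generic check, not part of the original output):

# jps ships with the JDK and lists the invoking user's Java processes by
# main class; run it as hdfs and look for a NameNode entry (the
# SecondaryNameNode will also show up in the list).
su - hdfs -c jps

Once HA is finalized the SecondaryNameNode role is removed entirely (the standby NameNode takes over checkpointing), so these messages only matter while the retry loop is still running.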
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out.1 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768541
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-omiprihdp03ap.mufep.net.out <==
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
	at java.lang.Thread.run(Thread.java:748)
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Log not rolled. Name node is in safe mode.
It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1422)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6309)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1247)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:144)
	at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12836)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
	at org.apache.hadoop.ipc.Client.call(Client.java:1498)
	at org.apache.hadoop.ipc.Client.call(Client.java:1398)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy10.rollEditLog(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
	at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:522)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:405)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:371)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:367)
	at java.lang.Thread.run(Thread.java:748)
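This SafeModeException is self-explanatory: checkpointing fails because the NameNode was put into safe mode manually. The Enable NameNode HA wizard itself asks for safe mode during its checkpoint step, so only clear it once the wizard no longer expects it; at that point it can be cleared as the message suggests (run as the hdfs service user):

sudo -u hdfs hdfs dfsadmin -safemode get    # should report OFF when healthy
sudo -u hdfs hdfs dfsadmin -safemode leave  # only if it was enabled manually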
==> /var/log/hadoop/hdfs/gc.log-201810251035 <==
2018-10-27T04:12:41.342+0200: 149836.581: [GC (Allocation Failure) 2018-10-27T04:12:41.342+0200: 149836.581: [ParNew: 210525K->605K(235968K), 0.0069253 secs] 244014K->34098K(2070976K), 0.0070102 secs] [Times: user=0.03 sys=0.01, real=0.00 secs] 
2018-10-27T05:50:41.987+0200: 155717.225: [GC (Allocation Failure) 2018-10-27T05:50:41.987+0200: 155717.225: [ParNew: 210397K->665K(235968K), 0.0053941 secs] 243890K->34160K(2070976K), 0.0054826 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-27T07:36:42.535+0200: 162077.773: [GC (Allocation Failure) 2018-10-27T07:36:42.535+0200: 162077.773: [ParNew: 210457K->684K(235968K), 0.0060738 secs] 243952K->34179K(2070976K), 0.0061472 secs] [Times: user=0.04 sys=0.01, real=0.01 secs] 
2018-10-27T09:38:42.526+0200: 169397.765: [GC (Allocation Failure) 2018-10-27T09:38:42.526+0200: 169397.765: [ParNew: 210476K->654K(235968K), 0.0062458 secs] 243971K->34179K(2070976K), 0.0063217 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-27T10:52:43.534+0200: 173838.772: [GC (Allocation Failure) 2018-10-27T10:52:43.534+0200: 173838.773: [ParNew: 210446K->449K(235968K), 0.0052087 secs] 243971K->34114K(2070976K), 0.0052878 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-27T12:52:44.148+0200: 181039.386: [GC (Allocation Failure) 2018-10-27T12:52:44.148+0200: 181039.386: [ParNew: 210241K->584K(235968K), 0.0064877 secs] 243906K->34249K(2070976K), 0.0065748 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-27T14:53:44.821+0200: 188300.060: [GC (Allocation Failure) 2018-10-27T14:53:44.822+0200: 188300.060: [ParNew: 210376K->765K(235968K), 0.0064116 secs] 244041K->34430K(2070976K), 0.0064888 secs] [Times: user=0.03 sys=0.01, real=0.00 secs] 
2018-10-27T16:31:45.355+0200: 194180.593: [GC (Allocation Failure) 2018-10-27T16:31:45.355+0200: 194180.593: [ParNew: 210557K->594K(235968K), 0.0061077 secs] 244222K->34324K(2070976K), 0.0062066 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-27T18:07:45.924+0200: 199941.163: [GC (Allocation Failure) 2018-10-27T18:07:45.924+0200: 199941.163: [ParNew: 210386K->670K(235968K), 0.0047476 secs] 244116K->34409K(2070976K), 0.0048352 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-27T19:46:42.537+0200: 205877.775: [GC (Allocation Failure) 2018-10-27T19:46:42.537+0200: 205877.775: [ParNew: 210462K->707K(235968K), 0.0071229 secs] 244201K->34447K(2070976K), 0.0072222 secs] [Times: user=0.05 sys=0.00, real=0.01 secs] 
2018-10-27T21:24:42.521+0200: 211757.759: [GC (Allocation Failure) 2018-10-27T21:24:42.521+0200: 211757.759: [ParNew: 210499K->542K(235968K), 0.0057302 secs] 244239K->34349K(2070976K), 0.0058097 secs] [Times: user=0.02 sys=0.01, real=0.01 secs] 
2018-10-27T22:38:47.452+0200: 216202.690: [GC (Allocation Failure) 2018-10-27T22:38:47.452+0200: 216202.690: [ParNew: 210334K->617K(235968K), 0.0067979 secs] 244141K->34424K(2070976K), 0.0068865 secs] [Times: user=0.04 sys=0.01, real=0.00 secs] 
2018-10-28T00:39:42.520+0200: 223457.759: [GC (Allocation Failure) 2018-10-28T00:39:42.520+0200: 223457.759: [ParNew: 210409K->700K(235968K), 0.0068970 secs] 244216K->34507K(2070976K), 0.0069853 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-28T02:41:58.556+0200: 230793.794: [GC (Allocation Failure) 2018-10-28T02:41:58.556+0200: 230793.794: [ParNew: 210492K->755K(235968K), 0.0065499 secs] 244299K->34564K(2070976K), 0.0066532 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-28T04:36:49.325+0200: 237684.564: [GC (Allocation Failure) 2018-10-28T04:36:49.325+0200: 237684.564: [ParNew: 210547K->759K(235968K), 0.0048834 secs] 244356K->34634K(2070976K), 0.0049761 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-28T06:36:50.000+0200: 244885.238: [GC (Allocation Failure) 2018-10-28T06:36:50.000+0200: 244885.238: [ParNew: 210551K->713K(235968K), 0.0070871 secs] 244426K->34589K(2070976K), 0.0071777 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
2018-10-28T08:39:42.527+0200: 252257.765: [GC (Allocation Failure) 2018-10-28T08:39:42.527+0200: 252257.765: [ParNew: 210505K->751K(235968K), 0.0059436 secs] 244381K->34627K(2070976K), 0.0060241 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-28T10:36:51.347+0200: 259286.585: [GC (Allocation Failure) 2018-10-28T10:36:51.347+0200: 259286.585: [ParNew: 210543K->662K(235968K), 0.0059278 secs] 244419K->34601K(2070976K), 0.0060151 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-28T12:12:51.913+0200: 265047.151: [GC (Allocation Failure) 2018-10-28T12:12:51.913+0200: 265047.152: [ParNew: 210454K->648K(235968K), 0.0062106 secs] 244393K->34591K(2070976K), 0.0062920 secs] [Times: user=0.04 sys=0.00, real=0.00 secs] 
2018-10-28T14:14:52.551+0200: 272367.789: [GC (Allocation Failure) 2018-10-28T14:14:52.551+0200: 272367.789: [ParNew: 210440K->773K(235968K), 0.0048926 secs] 244383K->34716K(2070976K), 0.0049833 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-28T15:52:57.529+0200: 278252.768: [GC (Allocation Failure) 2018-10-28T15:52:57.529+0200: 278252.768: [ParNew: 210565K->483K(235968K), 0.0061652 secs] 244508K->34573K(2070976K), 0.0063145 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-28T17:31:53.681+0200: 284188.920: [GC (Allocation Failure) 2018-10-28T17:31:53.682+0200: 284188.920: [ParNew: 210275K->535K(235968K), 0.0051891 secs] 244365K->34636K(2070976K), 0.0052850 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-28T19:09:54.206+0200: 290069.445: [GC (Allocation Failure) 2018-10-28T19:09:54.206+0200: 290069.445: [ParNew: 210327K->560K(235968K), 0.0045909 secs] 244428K->34662K(2070976K), 0.0046698 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2018-10-28T21:10:54.829+0200: 297330.067: [GC (Allocation Failure) 2018-10-28T21:10:54.829+0200: 297330.067: [ParNew: 210352K->500K(235968K), 0.0055804 secs] 244454K->34602K(2070976K), 0.0056861 secs] [Times: user=0.04 sys=0.00, real=0.01 secs] 
2018-10-28T22:36:55.350+0200: 302490.589: [GC (Allocation Failure) 2018-10-28T22:36:55.350+0200: 302490.589: [ParNew: 210292K->532K(235968K), 0.0048859 secs] 244394K->34675K(2070976K), 0.0049635 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
2018-10-29T00:21:55.991+0200: 308791.229: [GC (Allocation Failure) 2018-10-29T00:21:55.991+0200: 308791.229: [ParNew: 210324K->589K(235968K), 0.0054767 secs] 244467K->34731K(2070976K), 0.0055682 secs] [Times: user=0.03 sys=0.01, real=0.00 secs] 
2018-10-29T02:23:56.643+0200: 316111.881: [GC (Allocation Failure) 2018-10-29T02:23:56.643+0200: 316111.881: [ParNew: 210381K->585K(235968K), 0.0061674 secs] 244523K->34728K(2070976K), 0.0062612 secs] [Times: user=0.03 sys=0.01, real=0.00 secs] 
2018-10-29T04:07:42.512+0200: 322337.750: [GC (Allocation Failure) 2018-10-29T04:07:42.512+0200: 322337.750: [ParNew: 210377K->503K(235968K), 0.0042246 secs] 244520K->34707K(2070976K), 0.0042889 secs] [Times: user=0.03 sys=0.00, real=0.00 secs] 
2018-10-29T05:27:57.787+0200: 327153.025: [GC (Allocation Failure) 2018-10-29T05:27:57.787+0200: 327153.025: [ParNew: 210295K->495K(235968K), 0.0061271 secs] 244499K->34700K(2070976K), 0.0062285 secs] [Times: user=0.03 sys=0.02, real=0.01 secs] 
2018-10-29T07:32:58.509+0200: 334653.748: [GC (Allocation Failure) 2018-10-29T07:32:58.509+0200: 334653.748: [ParNew: 210287K->625K(235968K), 0.0061059 secs] 244492K->34829K(2070976K), 0.0061912 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
2018-10-29T09:09:59.034+0200: 340474.272: [GC (Allocation Failure) 2018-10-29T09:09:59.034+0200: 340474.272: [ParNew: 210417K->691K(235968K), 0.0064946 secs] 244621K->34917K(2070976K), 0.0066242 secs] [Times: user=0.04 sys=0.00, real=0.00 secs] 
2018-10-29T10:45:59.610+0200: 346234.849: [GC (Allocation Failure) 2018-10-29T10:45:59.610+0200: 346234.849: [ParNew: 210483K->598K(235968K), 0.0061810 secs] 244709K->34868K(2070976K), 0.0062636 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 145304K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  68% used [0x0000000080000000, 0x0000000088d50a40, 0x000000008cce0000)
  from space 26176K,   2% used [0x000000008cce0000, 0x000000008cd75858, 0x000000008e670000)
  to   space 26176K,   0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
 concurrent mark-sweep generation total 1835008K, used 34270K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 26848K, capacity 27070K, committed 27404K, reserved 1073152K
  class space    used 2769K, capacity 2847K, committed 2864K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291212 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184492108k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
Heap
 par new generation   total 235968K, used 92325K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  44% used [0x0000000080000000, 0x0000000085a294a8, 0x000000008cce0000)
  from space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
  to   space 26176K,   0% used [0x000000008e670000, 0x000000008e670000, 0x0000000090000000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 10752K, capacity 10886K, committed 11008K, reserved 1058816K
  class space    used 1146K, capacity 1221K, committed 1280K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201810291216 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184465952k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T12:16:35.710+0200: 1.255: [GC (Allocation Failure) 2018-10-29T12:16:35.710+0200: 1.255: [ParNew: 209792K->14139K(235968K), 0.0133716 secs] 209792K->14139K(2070976K), 0.0140281 secs] [Times: user=0.04 sys=0.02, real=0.02 secs] 
Heap
 par new generation   total 235968K, used 96404K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x00000000850565c8, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43ed68, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21417K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2440K, capacity 2553K, committed 2560K, reserved 1048576K
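One detail worth noting: in this log and the next one, the -XX:OnOutOfMemoryError flag appears three times on the CommandLine flags line, whereas the older gc.log-201810291212 above shows it only once. As far as I know this duplication is cosmetic and typically comes from HADOOP_NAMENODE_OPTS being appended more than once as the nested hadoop-env.sh/daemon scripts re-evaluate it; it should not by itself stop the NameNode. It can be checked straight from the log headers:

# Count occurrences of the flag in each recent GC log (paths taken from the output above)
for f in /var/log/hadoop/hdfs/gc.log-201810291216 /var/log/hadoop/hdfs/gc.log-201810291332; do
  echo "$f: $(grep -o OnOutOfMemoryError "$f" | wc -l) occurrence(s)"
done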
==> /var/log/hadoop/hdfs/gc.log-201810291332 <==
OpenJDK 64-Bit Server VM (25.191-b12) for linux-amd64 JRE (1.8.0_191-b12), built on Oct  9 2018 08:21:41 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
Memory: 4k page, physical 197551308k(184378432k free), swap 16777212k(16777212k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:MaxNewSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=268435456 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-10-29T13:32:34.683+0200: 1.251: [GC (Allocation Failure) 2018-10-29T13:32:34.683+0200: 1.252: [ParNew: 209792K->14143K(235968K), 0.0138683 secs] 209792K->14143K(2070976K), 0.0145293 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 235968K, used 96408K [0x0000000080000000, 0x0000000090000000, 0x0000000090000000)
  eden space 209792K,  39% used [0x0000000080000000, 0x0000000085056510, 0x000000008cce0000)
  from space 26176K,  54% used [0x000000008e670000, 0x000000008f43fd30, 0x0000000090000000)
  to   space 26176K,   0% used [0x000000008cce0000, 0x000000008cce0000, 0x000000008e670000)
 concurrent mark-sweep generation total 1835008K, used 0K [0x0000000090000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21424K, capacity 21686K, committed 21960K, reserved 1069056K
  class space    used 2436K, capacity 2553K, committed 2560K, reserved 1048576K

Command failed after 1 tries
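The GC logs themselves look healthy: each restart shows a freshly initialized 2 GB CMS heap (concurrent mark-sweep generation at 0K used, eden around 40%), so the process does not appear to be failing from memory pressure. The real exit reason is therefore more likely in the NameNode's main log than in these GC logs; under the standard HDP layout (file name pattern assumed) the newest one can be inspected with:

# Show the tail of the most recent NameNode log for the actual failure message
ls -t /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | head -1 | xargs tail -n 100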