
Error starting NameNode

How can I fix this? It looks like a problem binding port 50070, but I checked and that port is not in use.
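
For reference, this is roughly how I checked that nothing is listening on the NameNode HTTP port (50070 is the default; the exact commands available may vary by distro):

    # none of these show a listener on port 50070
    ss -lntp | grep -w 50070
    netstat -tulpn | grep -w 50070
    lsof -i :50070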

---------------------------
stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 361, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 99, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out
 stdout:
2018-01-29 14:30:07,349 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-01-29 14:30:07,396 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-01-29 14:30:07,574 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-01-29 14:30:07,575 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-01-29 14:30:07,576 - Group['hdfs'] {}
2018-01-29 14:30:07,578 - Group['hadoop'] {}
2018-01-29 14:30:07,578 - Group['users'] {}
2018-01-29 14:30:07,579 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-29 14:30:07,582 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-29 14:30:07,583 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-01-29 14:30:07,583 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-01-29 14:30:07,584 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-29 14:30:07,587 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-01-29 14:30:07,601 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-01-29 14:30:07,602 - Group['hdfs'] {}
2018-01-29 14:30:07,602 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2018-01-29 14:30:07,603 - FS Type: 
2018-01-29 14:30:07,604 - Directory['/etc/hadoop'] {'mode': 0755}
2018-01-29 14:30:07,634 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-01-29 14:30:07,636 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-01-29 14:30:07,656 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-01-29 14:30:07,669 - Skipping Execute[('setenforce', '0')] due to not_if
2018-01-29 14:30:07,670 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-01-29 14:30:07,674 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-01-29 14:30:07,675 - Changing owner for /var/run/hadoop from 504 to root
2018-01-29 14:30:07,675 - Changing group for /var/run/hadoop from 502 to root
2018-01-29 14:30:07,675 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-01-29 14:30:07,682 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-01-29 14:30:07,684 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-01-29 14:30:07,690 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-01-29 14:30:07,703 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-01-29 14:30:07,704 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-01-29 14:30:07,705 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-01-29 14:30:07,710 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-01-29 14:30:07,717 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-01-29 14:30:08,152 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-01-29 14:30:08,153 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-01-29 14:30:08,155 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-01-29 14:30:08,162 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-01-29 14:30:08,172 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-01-29 14:30:08,173 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-01-29 14:30:08,189 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-policy.xml
2018-01-29 14:30:08,189 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,204 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-01-29 14:30:08,218 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/ssl-client.xml
2018-01-29 14:30:08,219 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,229 - Directory['/usr/hdp/2.6.4.0-91/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-01-29 14:30:08,231 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2018-01-29 14:30:08,244 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/secure/ssl-client.xml
2018-01-29 14:30:08,245 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,255 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-01-29 14:30:08,270 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/ssl-server.xml
2018-01-29 14:30:08,271 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,282 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'configurations': ...}
2018-01-29 14:30:08,296 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/hdfs-site.xml
2018-01-29 14:30:08,296 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,364 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.4.0-91/hadoop/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2018-01-29 14:30:08,377 - Generating config: /usr/hdp/2.6.4.0-91/hadoop/conf/core-site.xml
2018-01-29 14:30:08,378 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-01-29 14:30:08,402 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2018-01-29 14:30:08,403 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-01-29 14:30:08,405 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-01-29 14:30:08,406 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2018-01-29 14:30:08,407 - Called service start with upgrade_type: None
2018-01-29 14:30:08,407 - Ranger Hdfs plugin is not enabled
2018-01-29 14:30:08,410 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2018-01-29 14:30:08,410 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2018-01-29 14:30:08,411 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2018-01-29 14:30:08,411 - Options for start command are: 
2018-01-29 14:30:08,411 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2018-01-29 14:30:08,412 - Changing owner for /var/run/hadoop from 0 to hdfs
2018-01-29 14:30:08,412 - Changing group for /var/run/hadoop from 0 to hadoop
2018-01-29 14:30:08,412 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-01-29 14:30:08,413 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2018-01-29 14:30:08,413 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-01-29 14:30:08,432 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2018-01-29 14:30:08,433 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.4.0-91/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2018-01-29 14:30:12,601 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201801291115 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(165188k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:15:59.281+0800: 3.297: [GC (Allocation Failure) 2018-01-29T11:15:59.281+0800: 3.297: [ParNew: 104960K->13055K(118016K), 0.2847550 secs] 104960K->17829K(1035520K), 0.2849881 secs] [Times: user=0.11 sys=0.36, real=0.29 secs] 
Heap
 par new generation   total 118016K, used 57581K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  42% used [0x00000000c0000000, 0x00000000c2b7b5c8, 0x00000000c6680000)
  from space 13056K,  99% used [0x00000000c7340000, 0x00000000c7fffff8, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 4773K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 17025K, capacity 17242K, committed 17408K, reserved 1064960K
  class space    used 1972K, capacity 2033K, committed 2048K, reserved 1048576K
7 secs] [Times: user=1.75 sys=0.02, real=5.20 secs] 
2018-01-29T11:16:00.138+0800: 15.422: [GC (CMS Final Remark) [YG occupancy: 146671 K (184320 K)]2018-01-29T11:16:00.138+0800: 15.423: [Rescan (parallel) , 0.1019447 secs]2018-01-29T11:16:00.240+0800: 15.525: [weak refs processing, 0.0000402 secs]2018-01-29T11:16:00.240+0800: 15.525: [class unloading, 0.0291535 secs]2018-01-29T11:16:00.269+0800: 15.554: [scrub symbol table, 0.0067833 secs]2018-01-29T11:16:00.276+0800: 15.561: [scrub string table, 0.0007261 secs][1 CMS-remark: 0K(843776K)] 146671K(1028096K), 0.1460668 secs] [Times: user=0.10 sys=0.00, real=0.15 secs] 
2018-01-29T11:16:00.286+0800: 15.571: [CMS-concurrent-sweep-start]
2018-01-29T11:16:00.286+0800: 15.571: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-01-29T11:16:00.286+0800: 15.571: [CMS-concurrent-reset-start]
2018-01-29T11:16:00.310+0800: 15.595: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.00 sys=0.01, real=0.02 secs] 
2018-01-29T11:18:35.336+0800: 170.620: [GC (Allocation Failure) 2018-01-29T11:18:35.337+0800: 170.621: [ParNew: 176287K->13311K(184320K), 0.1058691 secs] 176287K->17192K(1028096K), 0.1065178 secs] [Times: user=0.12 sys=0.04, real=0.10 secs] 
Heap
 par new generation   total 184320K, used 65205K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
  eden space 163840K,  31% used [0x00000000c0000000, 0x00000000c32ad818, 0x00000000ca000000)
  from space 20480K,  64% used [0x00000000ca000000, 0x00000000cacffcf0, 0x00000000cb400000)
  to   space 20480K,   0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
 concurrent mark-sweep generation total 843776K, used 3881K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 28467K, capacity 28790K, committed 29308K, reserved 1075200K
  class space    used 3481K, capacity 3590K, committed 3708K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31381
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201801291344 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(625588k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T13:45:00.370+0800: 2.063: [GC (Allocation Failure) 2018-01-29T13:45:00.371+0800: 2.064: [ParNew: 104960K->9520K(118016K), 0.0819412 secs] 104960K->9520K(1035520K), 0.0824180 secs] [Times: user=0.13 sys=0.02, real=0.09 secs] 
Heap
 par new generation   total 118016K, used 78251K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431e928, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c8c358, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18309K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291430 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062336k(5536632k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T14:30:10.411+0800: 1.779: [GC (Allocation Failure) 2018-01-29T14:30:10.411+0800: 1.779: [ParNew: 104960K->9494K(118016K), 0.0499380 secs] 104960K->9494K(1035520K), 0.0501729 secs] [Times: user=0.07 sys=0.01, real=0.05 secs] 
Heap
 par new generation   total 118016K, used 78234K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c4321080, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c85a90, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18286K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291128 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(481436k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:28:38.551+0800: 1.880: [GC (Allocation Failure) 2018-01-29T11:28:38.551+0800: 1.880: [ParNew: 104960K->9541K(118016K), 0.0580140 secs] 104960K->9541K(1035520K), 0.0581977 secs] [Times: user=0.04 sys=0.06, real=0.06 secs] 
Heap
 par new generation   total 118016K, used 78269K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431e160, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c91570, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-kvm-014239.novalocal.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201801291321 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(676340k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T13:21:50.052+0800: 2.097: [GC (Allocation Failure) 2018-01-29T13:21:50.052+0800: 2.097: [ParNew: 104960K->9534K(118016K), 0.0571033 secs] 104960K->9534K(1035520K), 0.0573264 secs] [Times: user=0.05 sys=0.03, real=0.05 secs] 
Heap
 par new generation   total 118016K, used 78267K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431f630, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c8f900, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18275K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291404 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062336k(5648456k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T14:04:55.964+0800: 2.218: [GC (Allocation Failure) 2018-01-29T14:04:55.964+0800: 2.218: [ParNew: 104960K->9500K(118016K), 0.0714236 secs] 104960K->9500K(1035520K), 0.0717629 secs] [Times: user=0.05 sys=0.01, real=0.07 secs] 
Heap
 par new generation   total 118016K, used 77686K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c42967d8, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c87268, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18305K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2245K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-kvm-014239.novalocal.log <==
2018-01-29 14:29:29,507 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 31 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:30,511 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 32 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:31,515 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 33 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:32,517 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 34 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:33,520 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 35 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:34,523 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 36 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:35,525 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 37 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:36,527 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 38 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:37,531 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 39 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:38,533 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 40 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:39,535 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 41 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:40,537 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 42 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:41,539 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 43 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:42,541 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 44 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:43,545 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 45 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:44,550 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:45,554 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:46,557 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:47,559 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:47,564 WARN  datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(227)) - Problem connecting to server: kvm-014239.novalocal/9.111.139.69:8020
2018-01-29 14:29:53,568 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:54,573 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:55,577 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:56,579 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:57,581 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:58,588 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:29:59,590 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:00,592 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:01,599 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:02,601 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:03,602 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:04,604 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:05,607 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:06,609 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:07,614 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:08,618 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:09,622 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:10,624 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:11,627 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2018-01-29 14:30:12,629 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: kvm-014239.novalocal/9.111.139.69:8020. Already tried 19 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-kvm-014239.novalocal.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out.5 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201801291116 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(327804k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:16:05.437+0800: 4.131: [GC (Allocation Failure) 2018-01-29T11:16:05.437+0800: 4.131: [ParNew: 104960K->9597K(118016K), 0.2484504 secs] 104960K->9597K(1035520K), 0.2488424 secs] [Times: user=0.19 sys=0.20, real=0.25 secs] 
Heap
 par new generation   total 118016K, used 77768K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4292d20, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c9f580, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18337K, capacity 18676K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-kvm-014239.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31381
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out.4 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31381
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/gc.log-201801291130 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(469476k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:31:00.881+0800: 2.255: [GC (Allocation Failure) 2018-01-29T11:31:00.881+0800: 2.255: [ParNew: 104960K->9528K(118016K), 0.1698101 secs] 104960K->9528K(1035520K), 0.1699449 secs] [Times: user=0.19 sys=0.09, real=0.17 secs] 
Heap
 par new generation   total 118016K, used 78258K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431e838, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c8e1a0, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18320K, capacity 18676K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
.648+0800: 9.717: [Rescan (parallel) , 0.0396598 secs]2018-01-29T11:31:00.688+0800: 9.756: [weak refs processing, 0.0000403 secs]2018-01-29T11:31:00.688+0800: 9.756: [class unloading, 0.0145710 secs]2018-01-29T11:31:00.703+0800: 9.771: [scrub symbol table, 0.0064015 secs]2018-01-29T11:31:00.709+0800: 9.777: [scrub string table, 0.0006191 secs][1 CMS-remark: 0K(843776K)] 146700K(1028096K), 0.0621982 secs] [Times: user=0.08 sys=0.00, real=0.07 secs] 
2018-01-29T11:31:00.711+0800: 9.779: [CMS-concurrent-sweep-start]
2018-01-29T11:31:00.711+0800: 9.779: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-01-29T11:31:00.711+0800: 9.779: [CMS-concurrent-reset-start]
2018-01-29T11:31:00.723+0800: 9.791: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 184320K, used 176282K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
  eden space 163840K, 100% used [0x00000000c0000000, 0x00000000ca000000, 0x00000000ca000000)
  from space 20480K,  60% used [0x00000000cb400000, 0x00000000cc026a08, 0x00000000cc800000)
  to   space 20480K,   0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
 concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 27921K, capacity 28312K, committed 28732K, reserved 1075200K
  class space    used 3477K, capacity 3585K, committed 3656K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291134 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(741572k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:34:40.321+0800: 2.489: [GC (Allocation Failure) 2018-01-29T11:34:40.321+0800: 2.489: [ParNew: 163840K->12445K(184320K), 0.0348014 secs] 163840K->12445K(1028096K), 0.0352060 secs] [Times: user=0.04 sys=0.01, real=0.04 secs] 
2018-01-29T11:34:42.360+0800: 4.528: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 112805K(1028096K), 0.0245020 secs] [Times: user=0.04 sys=0.00, real=0.03 secs] 
2018-01-29T11:34:42.385+0800: 4.553: [CMS-concurrent-mark-start]
2018-01-29T11:34:42.405+0800: 4.573: [CMS-concurrent-mark: 0.020/0.020 secs] [Times: user=0.01 sys=0.02, real=0.02 secs] 
2018-01-29T11:34:42.405+0800: 4.573: [CMS-concurrent-preclean-start]
2018-01-29T11:34:42.409+0800: 4.577: [CMS-concurrent-preclean: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2018-01-29T11:34:42.409+0800: 4.577: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-01-29T11:34:47.486+0800: 9.654: [CMS-concurrent-abortable-preclean: 1.783/5.077 secs] [Times: user=3.07 sys=0.09, real=5.08 secs] 
2018-01-29T11:34:47.487+0800: 9.655: [GC (CMS Final Remark) [YG occupancy: 150539 K (184320 K)]2018-01-29T11:34:47.488+0800: 9.655: [Rescan (parallel) , 0.0329629 secs]2018-01-29T11:34:47.521+0800: 9.688: [weak refs processing, 0.0000481 secs]2018-01-29T11:34:47.521+0800: 9.688: [class unloading, 0.0181242 secs]2018-01-29T11:34:47.539+0800: 9.707: [scrub symbol table, 0.0054449 secs]2018-01-29T11:34:47.544+0800: 9.712: [scrub string table, 0.0007044 secs][1 CMS-remark: 0K(843776K)] 150539K(1028096K), 0.0582438 secs] [Times: user=0.09 sys=0.00, real=0.06 secs] 
2018-01-29T11:34:47.546+0800: 9.714: [CMS-concurrent-sweep-start]
2018-01-29T11:34:47.547+0800: 9.714: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-01-29T11:34:47.547+0800: 9.714: [CMS-concurrent-reset-start]
2018-01-29T11:34:47.560+0800: 9.727: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2018-01-29T11:37:02.614+0800: 144.782: [GC (Allocation Failure) 2018-01-29T11:37:02.615+0800: 144.782: [ParNew: 176285K->15007K(184320K), 0.1408719 secs] 176285K->18938K(1028096K), 0.1410609 secs] [Times: user=0.11 sys=0.01, real=0.14 secs] 
2018-01-29T12:27:11.214+0800: 3153.382: [GC (Allocation Failure) 2018-01-29T12:27:11.215+0800: 3153.382: [ParNew: 178847K->4558K(184320K), 0.0394061 secs] 182778K->12903K(1028096K), 0.0398636 secs] [Times: user=0.06 sys=0.01, real=0.04 secs] 
2018-01-29T13:21:01.534+0800: 6383.702: [GC (Allocation Failure) 2018-01-29T13:21:01.535+0800: 6383.702: [ParNew: 168398K->1727K(184320K), 0.0209188 secs] 176743K->10072K(1028096K), 0.0211587 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 184320K, used 2295K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
  eden space 163840K,   0% used [0x00000000c0000000, 0x00000000c008df40, 0x00000000ca000000)
  from space 20480K,   8% used [0x00000000ca000000, 0x00000000ca1afe08, 0x00000000cb400000)
  to   space 20480K,   0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
 concurrent mark-sweep generation total 843776K, used 8344K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 29429K, capacity 29702K, committed 30076K, reserved 1075200K
  class space    used 3484K, capacity 3592K, committed 3708K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291136 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(433568k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:36:19.447+0800: 2.439: [GC (Allocation Failure) 2018-01-29T11:36:19.447+0800: 2.439: [ParNew: 104960K->9524K(118016K), 0.1769908 secs] 104960K->9524K(1035520K), 0.1771496 secs] [Times: user=0.28 sys=0.05, real=0.17 secs] 
Heap
 par new generation   total 118016K, used 78256K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431ed18, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c8d2f8, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18290K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201801291132 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(758456k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T11:32:49.236+0800: 1.596: [GC (Allocation Failure) 2018-01-29T11:32:49.236+0800: 1.596: [ParNew: 104960K->9513K(118016K), 0.0527192 secs] 104960K->9513K(1035520K), 0.0531792 secs] [Times: user=0.06 sys=0.03, real=0.05 secs] 
Heap
 par new generation   total 118016K, used 78245K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431f108, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c8a500, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18280K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2248K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31381
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201801291332 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062380k(669112k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T13:32:56.259+0800: 2.369: [GC (Allocation Failure) 2018-01-29T13:32:56.260+0800: 2.371: [ParNew: 104960K->9507K(118016K), 0.0436062 secs] 104960K->9507K(1035520K), 0.0448394 secs] [Times: user=0.05 sys=0.03, real=0.05 secs] 
Heap
 par new generation   total 118016K, used 78241K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  65% used [0x00000000c0000000, 0x00000000c431f918, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c88c88, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18306K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2247K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-kvm-014239.novalocal.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 31369
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201801291402 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062336k(6004960k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T14:02:14.128+0800: 3.693: [GC (Allocation Failure) 2018-01-29T14:02:14.128+0800: 3.693: [ParNew: 163840K->12378K(184320K), 0.0524328 secs] 163840K->12378K(1028096K), 0.0527030 secs] [Times: user=0.05 sys=0.02, real=0.05 secs] 
2018-01-29T14:02:16.185+0800: 5.750: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 120813K(1028096K), 0.0238452 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
2018-01-29T14:02:16.209+0800: 5.774: [CMS-concurrent-mark-start]
2018-01-29T14:02:16.229+0800: 5.794: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.03 sys=0.01, real=0.02 secs] 
2018-01-29T14:02:16.229+0800: 5.794: [CMS-concurrent-preclean-start]
2018-01-29T14:02:16.233+0800: 5.798: [CMS-concurrent-preclean: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-01-29T14:02:16.233+0800: 5.798: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018-01-29T14:02:21.244+0800: 10.809: [CMS-concurrent-abortable-preclean: 1.758/5.011 secs] [Times: user=4.17 sys=0.16, real=5.01 secs] 
2018-01-29T14:02:21.245+0800: 10.810: [GC (CMS Final Remark) [YG occupancy: 153385 K (184320 K)]2018-01-29T14:02:21.246+0800: 10.811: [Rescan (parallel) , 0.1072587 secs]2018-01-29T14:02:21.353+0800: 10.918: [weak refs processing, 0.0000791 secs]2018-01-29T14:02:21.353+0800: 10.918: [class unloading, 0.0336268 secs]2018-01-29T14:02:21.387+0800: 10.952: [scrub symbol table, 0.0116678 secs]2018-01-29T14:02:21.398+0800: 10.963: [scrub string table, 0.0012692 secs][1 CMS-remark: 0K(843776K)] 153385K(1028096K), 0.1552595 secs] [Times: user=0.20 sys=0.01, real=0.15 secs] 
2018-01-29T14:02:21.401+0800: 10.966: [CMS-concurrent-sweep-start]
2018-01-29T14:02:21.401+0800: 10.966: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2018-01-29T14:02:21.401+0800: 10.966: [CMS-concurrent-reset-start]
2018-01-29T14:02:21.416+0800: 10.981: [CMS-concurrent-reset: 0.015/0.015 secs] [Times: user=0.01 sys=0.00, real=0.02 secs] 
2018-01-29T14:03:49.591+0800: 99.156: [GC (Allocation Failure) 2018-01-29T14:03:49.591+0800: 99.156: [ParNew: 176218K->14092K(184320K), 0.1538157 secs] 176218K->18032K(1028096K), 0.1541090 secs] [Times: user=0.13 sys=0.01, real=0.15 secs] 
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-kvm-014239.novalocal.log <==
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2018-01-29 14:30:12,280 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2018-01-29 14:30:12,281 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2018-01-29 14:30:12,282 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2018-01-29 14:30:12,283 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2018-01-29 14:30:12,283 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.net.BindException: Port in use: kvm-014239.novalocal:50070
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1001)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1080)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:942)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:755)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1001)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:985)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2018-01-29 14:30:12,285 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-01-29 14:30:12,288 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at kvm-014239.novalocal/9.111.139.69
************************************************************/
==> /var/log/hadoop/hdfs/gc.log-201801291354 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8062336k(6122444k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2018-01-29T13:54:59.956+0800: 12.524: [GC (Allocation Failure) 2018-01-29T13:54:59.956+0800: 12.524: [ParNew: 104960K->9511K(118016K), 0.0555548 secs] 104960K->9511K(1035520K), 0.0557999 secs] [Times: user=0.07 sys=0.02, real=0.06 secs] 
Heap
 par new generation   total 118016K, used 77696K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4296370, 0x00000000c6680000)
  from space 13056K,  72% used [0x00000000c7340000, 0x00000000c7c89e50, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18323K, capacity 18676K, committed 18816K, reserved 1064960K
  class space    used 2245K, capacity 2360K, committed 2432K, reserved 1048576K
Command failed after 1 tries
3 REPLIES

Re: error start namenode

Super Mentor

@ultradawn Yan

As we can see, the error is due to a port conflict:

2018-01-29 14:30:12,283 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.
java.net.BindException: Port in use: kvm-014239.novalocal:50070
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1001)


So it is possible that the port is being used by some other process.

Can you please check if you see any output for the following command before starting the NameNode?

# netstat -tnlpa | grep 50070
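If netstat is not available on that host, ss reports the same listening sockets (this is just an equivalent check):

# ss -tnlp | grep 50070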


Also, please check that the hostname is correctly mapped to the IP address; both appear in the log snippet above (kvm-014239.novalocal / 9.111.139.69). Note that the underlying cause in your stack trace is "java.net.BindException: Cannot assign requested address" rather than "Address already in use", which usually means the address the NameNode is trying to bind to does not resolve to an IP configured on that machine. The checks below can confirm this.
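For example, the following standard Linux commands (using the hostname from your log) will show whether kvm-014239.novalocal resolves to an address that is actually assigned to one of the machine's interfaces:

# getent hosts kvm-014239.novalocal
# ip addr show | grep 'inet '
# grep kvm-014239 /etc/hosts

If getent returns an IP that does not appear in the ip addr output, the NameNode is being asked to bind to an address the host does not own, which produces exactly the "Cannot assign requested address" error.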


Also, if possible, please try changing the port from 50070 to something else, like 59970, just to see whether that lets the NameNode come up. This is only to isolate the issue.
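If you try that, the NameNode HTTP port is the port part of the dfs.namenode.http-address property in hdfs-site.xml (in Ambari: HDFS > Configs). You can check the current value on the host with something like the following, using the conf dir shown in your log:

# grep -A1 dfs.namenode.http-address /usr/hdp/2.6.4.0-91/hadoop/conf/hdfs-site.xml

Change the port there (via Ambari, so it is not overwritten) and restart HDFS afterwards so the new value is picked up.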



Re: error start namenode

New Contributor

I tried, but port 50070 is not in use.

-----------------------------------------------

[root@kvm-014239 tmp]# netstat -tnlpa | grep 50070
[root@kvm-014239 tmp]#


Re: error start namenode

Cloudera Employee

@ultradawn Yan Can you please send the output of:

netstat -tulpn | grep LISTEN

lsof -i:50070

Run these on the node on which the NameNode is not starting up.

If there is any service running on this port, try killing that process using kill -9, then try restarting the NameNode.
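For example (the PID below is only a placeholder; use whatever lsof actually reports):

# lsof -i:50070
# kill -9 <PID>

Then start the NameNode again from Ambari.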
