
NameNode cannot start
(Attached screenshot: screenshot-from-2019-03-15-10-08-09.png)
stdout:   /var/lib/ambari-agent/data/output-387.txt
2019-03-15 06:50:05,225 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,242 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,406 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,411 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,412 - Group['hdfs'] {}
2019-03-15 06:50:05,413 - Group['hadoop'] {}
2019-03-15 06:50:05,413 - Group['users'] {}
2019-03-15 06:50:05,414 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,415 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,415 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-03-15 06:50:05,416 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-03-15 06:50:05,417 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,417 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,418 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-15 06:50:05,419 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-15 06:50:05,427 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-15 06:50:05,428 - Group['hdfs'] {}
2019-03-15 06:50:05,428 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-03-15 06:50:05,428 - FS Type: 
2019-03-15 06:50:05,428 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-15 06:50:05,446 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,447 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-15 06:50:05,464 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-15 06:50:05,478 - Skipping Execute[('setenforce', '0')] due to only_if
2019-03-15 06:50:05,478 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-15 06:50:05,482 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-15 06:50:05,483 - Changing owner for /var/run/hadoop from 1004 to root
2019-03-15 06:50:05,483 - Changing group for /var/run/hadoop from 1002 to root
2019-03-15 06:50:05,484 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-15 06:50:05,489 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:05,491 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:05,497 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-15 06:50:05,506 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,507 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-15 06:50:05,507 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,511 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-15 06:50:05,518 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-15 06:50:05,608 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 06:50:05,634 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 06:50:05,911 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,912 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,932 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,947 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-15 06:50:05,951 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-15 06:50:05,952 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,962 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml
2019-03-15 06:50:05,962 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,971 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,978 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml
2019-03-15 06:50:05,978 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,984 - Directory['/usr/hdp/2.6.2.0-205/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-15 06:50:05,985 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,992 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml
2019-03-15 06:50:05,992 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,998 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:06,005 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml
2019-03-15 06:50:06,006 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,012 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-03-15 06:50:06,021 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml
2019-03-15 06:50:06,021 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,073 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-15 06:50:06,081 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml
2019-03-15 06:50:06,081 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,102 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:06,103 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:06,109 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-15 06:50:06,109 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-15 06:50:06,112 - Called service start with upgrade_type: None
2019-03-15 06:50:06,112 - Ranger Hdfs plugin is not enabled
2019-03-15 06:50:06,114 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-15 06:50:06,114 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-15 06:50:06,115 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-15 06:50:06,115 - Options for start command are: 
2019-03-15 06:50:06,115 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-15 06:50:06,115 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-15 06:50:06,116 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-15 06:50:06,116 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 06:50:06,116 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 06:50:06,117 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 06:50:06,137 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-15 06:50:06,138 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.2.0-205/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.2.0-205/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.2.0-205/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 06:50:10,317 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903150628 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25488852k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:28:52.955+0000: 1.012: [GC (GCLocker Initiated GC) 2019-03-15T06:28:52.955+0000: 1.012: [ParNew: 104960K->9633K(118016K), 0.0159947 secs] 104960K->9633K(1035520K), 0.0161429 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 118016K, used 76324K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  63% used [0x00000000c0000000, 0x00000000c4120a48, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7ca86e0, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18260K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15T06:30:05.713+0000: 85.667: [GC (Allocation Failure) 2019-03-15T06:30:05.713+0000: 85.667: [ParNew: 176325K->16223K(184320K), 0.0422067 secs] 176325K->20146K(1028096K), 0.0423734 secs] [Times: user=0.09 sys=0.01, real=0.04 secs] 
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.log <==
    at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:353)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
    at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
2019-03-15 06:49:33,316 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 37 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:34,317 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 38 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:35,319 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 39 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:36,320 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 40 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:37,322 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 41 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:38,323 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 42 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:39,324 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 43 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:40,326 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 44 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:41,327 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 45 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:42,328 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:43,329 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:44,330 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:45,332 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:45,333 WARN  datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(227)) - Problem connecting to server: sipnamenode.novalocal/10.0.35.134:8020
2019-03-15 06:49:51,335 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:52,336 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:53,338 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:54,339 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:55,340 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:56,342 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:57,343 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:58,344 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:59,346 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:00,347 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:01,348 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:02,350 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:03,351 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:04,352 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:05,354 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:06,355 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:07,356 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:08,358 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:09,359 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:10,360 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 19 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.log <==
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2019-03-15 06:50:08,171 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2019-03-15 06:50:08,172 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-15 06:50:08,173 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2019-03-15 06:50:08,173 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2019-03-15 06:50:08,173 ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.net.BindException: Port in use: sipnamenode.novalocal:50070
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1000)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1080)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:933)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2019-03-15 06:50:08,175 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2019-03-15 06:50:08,176 INFO  timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-03-15 06:50:08,177 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sipnamenode.novalocal/10.0.35.134
************************************************************/
==> /var/log/hadoop/hdfs/gc.log-201903150635 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25422016k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:35:56.264+0000: 1.064: [GC (Allocation Failure) 2019-03-15T06:35:56.264+0000: 1.064: [ParNew: 104960K->9551K(118016K), 0.0271395 secs] 104960K->9551K(1035520K), 0.0273200 secs] [Times: user=0.09 sys=0.00, real=0.03 secs] 
Heap
 par new generation   total 118016K, used 77297K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4228530, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c93f60, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18279K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150638 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25420080k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:38:51.156+0000: 1.088: [GC (Allocation Failure) 2019-03-15T06:38:51.156+0000: 1.088: [ParNew: 104960K->9549K(118016K), 0.0323991 secs] 104960K->9549K(1035520K), 0.0325641 secs] [Times: user=0.10 sys=0.01, real=0.04 secs] 
Heap
 par new generation   total 118016K, used 77303K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c422a9c0, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c93570, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903150650 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25308796k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:50:07.432+0000: 1.074: [GC (Allocation Failure) 2019-03-15T06:50:07.432+0000: 1.074: [ParNew: 104960K->9545K(118016K), 0.0239215 secs] 104960K->9545K(1035520K), 0.0240610 secs] [Times: user=0.07 sys=0.01, real=0.02 secs] 
Heap
 par new generation   total 118016K, used 77366K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c423b640, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c92470, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15 06:50:10,496 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 06:50:10,526 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 06:50:10,526 - The 'hadoop-hdfs-namenode' component did not advertise a version. This may indicate a problem with the component packaging.

Command failed after 1 tries


2 REPLIES 2

Re: NameNode can not Start

Super Mentor

@abraham fikire

We see:

java.net.BindException: Port in use: sipnamenode.novalocal:50070
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1000)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)


Please check the NameNode host to find out which other process is already using port 50070. You are getting a BindException, which is why the NameNode process is not able to bind to port 50070.

# netstat -tnlpa | grep 50070
# kill -9 $PID_WHICH_IS_USING_50070


Also make sure that the "/etc/hosts" file is correct and that you are not assigning multiple hostnames to your NameNode IP address.
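To double-check the hosts file, here is a minimal sketch (this awk one-liner is an illustration, not part of the original reply) that flags any IP address listed on more than one line:

```shell
# Sketch: print any IP address that appears on more than one non-comment line
# of /etc/hosts -- a hint that multiple hostnames map to the same address.
awk '!/^#/ && NF { seen[$1]++ } END { for (ip in seen) if (seen[ip] > 1) print ip, "mapped", seen[ip], "times" }' /etc/hosts
```

If an IP shows up more than once, consolidate its hostnames onto a single line or remove the stale entry.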



Re: NameNode can not Start

New Contributor

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

-bash: /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh: No such file or directory
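The "No such file or directory" error suggests the daemon script is not at the assumed relative path on this host. A simple sketch (assuming the HDP packages are installed under /usr/hdp; adjust the root if your layout differs) to locate the actual script instead of hard-coding its path:

```shell
# Sketch: search the HDP install tree for hadoop-daemon.sh rather than
# assuming the /usr/hdp/current/.../sbin relative path is valid.
find /usr/hdp -name hadoop-daemon.sh -type f 2>/dev/null
```

Once found, the `su -l hdfs -c "... start namenode"` command can be rerun with the path the search reports.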