namenode start failed on ambari ui

NameNode fails to start when launched from the Ambari UI on HDP 2.6.2.0-205. The Ambari agent's stderr and stdout for the failed start operation are below.

stderr: /var/lib/ambari-agent/data/errors-421.txt

2019-03-15 08:04:31,038 - The 'hadoop-hdfs-namenode' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 348, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 90, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 175, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.2.0-205/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.2.0-205/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out
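
The stderr above only shows that hadoop-daemon.sh exited with status 1. hadoop-daemon.sh sends console output to the .out file named in the message, while the actual startup error is written by log4j to the matching .log file (quoted further down in the stdout capture). A quick way to surface it, assuming shell access to the NameNode host as a user that can read /var/log/hadoop/hdfs:

    # Pull recent ERROR entries, with context, from the NameNode log
    tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.log | grep -A 15 'ERROR'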

stdout: /var/lib/ambari-agent/data/output-421.txt

2019-03-15 08:04:25,778 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 08:04:25,795 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 08:04:25,959 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 08:04:25,964 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 08:04:25,966 - Group['hdfs'] {}
2019-03-15 08:04:25,967 - Group['hadoop'] {}
2019-03-15 08:04:25,967 - Group['users'] {}
2019-03-15 08:04:25,968 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 08:04:25,969 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 08:04:25,969 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-03-15 08:04:25,970 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-03-15 08:04:25,971 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 08:04:25,971 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 08:04:25,972 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-15 08:04:25,973 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-15 08:04:25,981 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-15 08:04:25,982 - Group['hdfs'] {}
2019-03-15 08:04:25,982 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-03-15 08:04:25,983 - FS Type: 
2019-03-15 08:04:25,983 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-15 08:04:26,004 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 08:04:26,005 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-15 08:04:26,024 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-15 08:04:26,041 - Skipping Execute[('setenforce', '0')] due to only_if
2019-03-15 08:04:26,042 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-15 08:04:26,046 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-15 08:04:26,047 - Changing owner for /var/run/hadoop from 1004 to root
2019-03-15 08:04:26,047 - Changing group for /var/run/hadoop from 1002 to root
2019-03-15 08:04:26,047 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-15 08:04:26,051 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-15 08:04:26,053 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-15 08:04:26,060 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-15 08:04:26,070 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 08:04:26,071 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-15 08:04:26,072 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 08:04:26,076 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-15 08:04:26,084 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-15 08:04:26,169 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 08:04:26,195 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 08:04:26,397 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 08:04:26,398 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 08:04:26,418 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 08:04:26,433 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-15 08:04:26,438 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-15 08:04:26,439 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 08:04:26,448 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml
2019-03-15 08:04:26,449 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,457 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 08:04:26,465 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml
2019-03-15 08:04:26,465 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,471 - Directory['/usr/hdp/2.6.2.0-205/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-15 08:04:26,471 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 08:04:26,479 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml
2019-03-15 08:04:26,479 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,485 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 08:04:26,492 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml
2019-03-15 08:04:26,492 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,499 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-03-15 08:04:26,506 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml
2019-03-15 08:04:26,506 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,549 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-15 08:04:26,557 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml
2019-03-15 08:04:26,557 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-15 08:04:26,578 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-15 08:04:26,578 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 08:04:26,584 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-15 08:04:26,584 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-15 08:04:26,587 - Called service start with upgrade_type: None
2019-03-15 08:04:26,588 - Ranger Hdfs plugin is not enabled
2019-03-15 08:04:26,589 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-15 08:04:26,590 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-15 08:04:26,590 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-15 08:04:26,590 - Options for start command are: 
2019-03-15 08:04:26,590 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-15 08:04:26,591 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-15 08:04:26,591 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-15 08:04:26,591 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 08:04:26,591 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 08:04:26,592 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 08:04:26,611 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-15 08:04:26,612 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.2.0-205/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.2.0-205/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.2.0-205/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 08:04:30,813 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201903150628 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25488852k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:28:52.955+0000: 1.012: [GC (GCLocker Initiated GC) 2019-03-15T06:28:52.955+0000: 1.012: [ParNew: 104960K->9633K(118016K), 0.0159947 secs] 104960K->9633K(1035520K), 0.0161429 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 118016K, used 76324K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  63% used [0x00000000c0000000, 0x00000000c4120a48, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7ca86e0, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18260K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15T06:30:05.713+0000: 85.667: [GC (Allocation Failure) 2019-03-15T06:30:05.713+0000: 85.667: [ParNew: 176325K->16223K(184320K), 0.0422067 secs] 176325K->20146K(1028096K), 0.0423734 secs] [Times: user=0.09 sys=0.01, real=0.04 secs] 
Heap
 par new generation   total 184320K, used 131867K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
  eden space 163840K,  70% used [0x00000000c0000000, 0x00000000c70eed50, 0x00000000ca000000)
  from space 20480K,  79% used [0x00000000ca000000, 0x00000000cafd7f30, 0x00000000cb400000)
  to   space 20480K,   0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000)
 concurrent mark-sweep generation total 843776K, used 3922K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 29079K, capacity 29430K, committed 29664K, reserved 1075200K
  class space    used 3445K, capacity 3526K, committed 3552K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.log <==
    at org.apache.hadoop.metrics2.sink.relocated.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.hadoop.metrics2.sink.relocated.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.hadoop.metrics2.sink.relocated.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102)
    at org.apache.hadoop.metrics2.sink.relocated.zookeeper.ZooKeeper.exists(ZooKeeper.java:1130)
    at org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:77)
    at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:434)
    at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:273)
    at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.emitMetrics(AbstractTimelineMetricsSink.java:290)
    at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:353)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
    at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
    at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
2019-03-15 08:04:01,813 INFO  timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-03-15 08:04:06,665 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
[... attempts 1 through 23 omitted: identical retry messages, one per second ...]
2019-03-15 08:04:30,695 INFO  ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 24 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
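
These DataNode retries against sipnamenode.novalocal/10.0.35.134:8020 are a downstream symptom, not the fault itself: the NameNode process exits during startup (see its log below), so its RPC endpoint never comes up. A quick check that neither NameNode port has a listener, assuming a Linux host with net-tools installed:

    # No output means nothing is bound on the NameNode RPC (8020) or HTTP (50070) port
    netstat -tlnp | grep -E ':(8020|50070)'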
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.log <==
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2019-03-15 08:04:28,922 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2019-03-15 08:04:28,923 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-15 08:04:28,924 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2019-03-15 08:04:28,924 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2019-03-15 08:04:28,924 ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.net.BindException: Port in use: sipnamenode.novalocal:50070
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1000)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1080)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:933)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
    ... 9 more
2019-03-15 08:04:28,925 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2019-03-15 08:04:28,928 INFO  timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-03-15 08:04:28,929 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sipnamenode.novalocal/10.0.35.134
************************************************************/
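
Despite the outer "Port in use" wording, the nested exception is the real cause: "Cannot assign requested address" means the HTTP server was asked to bind sipnamenode.novalocal:50070, but the address that name resolves to (10.0.35.134 above) is not assigned to any local network interface. On OpenStack guests (note the .novalocal suffix) this usually indicates that /etc/hosts or DNS maps the hostname to an address the instance does not hold, e.g. a floating IP. A minimal check, assuming standard Linux tooling:

    # What address does the hostname resolve to?
    getent hosts sipnamenode.novalocal

    # Which IPv4 addresses are actually configured on this host?
    ip -4 addr show

If the resolved address does not appear in the interface list, correcting the /etc/hosts (or DNS) entry so the hostname maps to a locally configured address should allow the bind, and the NameNode start, to succeed.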
==> /var/log/hadoop/hdfs/gc.log-201903150635 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25422016k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:35:56.264+0000: 1.064: [GC (Allocation Failure) 2019-03-15T06:35:56.264+0000: 1.064: [ParNew: 104960K->9551K(118016K), 0.0271395 secs] 104960K->9551K(1035520K), 0.0273200 secs] [Times: user=0.09 sys=0.00, real=0.03 secs] 
Heap
 par new generation   total 118016K, used 77297K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4228530, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c93f60, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18279K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150638 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25420080k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:38:51.156+0000: 1.088: [GC (Allocation Failure) 2019-03-15T06:38:51.156+0000: 1.088: [ParNew: 104960K->9549K(118016K), 0.0323991 secs] 104960K->9549K(1035520K), 0.0325641 secs] [Times: user=0.10 sys=0.01, real=0.04 secs] 
Heap
 par new generation   total 118016K, used 77303K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c422a9c0, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c93570, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150650 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25308796k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T06:50:07.432+0000: 1.074: [GC (Allocation Failure) 2019-03-15T06:50:07.432+0000: 1.074: [ParNew: 104960K->9545K(118016K), 0.0239215 secs] 104960K->9545K(1035520K), 0.0240610 secs] [Times: user=0.07 sys=0.01, real=0.02 secs] 
Heap
 par new generation   total 118016K, used 77366K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c423b640, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c92470, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150703 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25609112k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T07:03:34.107+0000: 1.510: [GC (Allocation Failure) 2019-03-15T07:03:34.107+0000: 1.510: [ParNew: 163840K->12490K(184320K), 0.0153197 secs] 163840K->12490K(1028096K), 0.0154370 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] 
2019-03-15T07:03:36.125+0000: 3.528: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 156930K(1028096K), 0.0241762 secs] [Times: user=0.08 sys=0.00, real=0.03 secs] 
2019-03-15T07:03:36.149+0000: 3.553: [CMS-concurrent-mark-start]
2019-03-15T07:03:36.160+0000: 3.563: [CMS-concurrent-mark: 0.010/0.010 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2019-03-15T07:03:36.160+0000: 3.563: [CMS-concurrent-preclean-start]
2019-03-15T07:03:36.161+0000: 3.565: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-15T07:03:36.161+0000: 3.565: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2019-03-15T07:03:41.208+0000: 8.612: [CMS-concurrent-abortable-preclean: 1.341/5.047 secs] [Times: user=1.78 sys=0.02, real=5.05 secs] 
2019-03-15T07:03:41.209+0000: 8.613: [GC (CMS Final Remark) [YG occupancy: 156930 K (184320 K)]2019-03-15T07:03:41.209+0000: 8.613: [Rescan (parallel) , 0.0155540 secs]2019-03-15T07:03:41.225+0000: 8.628: [weak refs processing, 0.0000407 secs]2019-03-15T07:03:41.225+0000: 8.628: [class unloading, 0.0080444 secs]2019-03-15T07:03:41.233+0000: 8.636: [scrub symbol table, 0.0044847 secs]2019-03-15T07:03:41.237+0000: 8.641: [scrub string table, 0.0005498 secs][1 CMS-remark: 0K(843776K)] 156930K(1028096K), 0.0293544 secs] [Times: user=0.08 sys=0.00, real=0.02 secs] 
2019-03-15T07:03:41.238+0000: 8.642: [CMS-concurrent-sweep-start]
2019-03-15T07:03:41.238+0000: 8.642: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-15T07:03:41.238+0000: 8.642: [CMS-concurrent-reset-start]
2019-03-15T07:03:41.243+0000: 8.647: [CMS-concurrent-reset: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-15T07:05:05.587+0000: 92.991: [GC (Allocation Failure) 2019-03-15T07:05:05.587+0000: 92.991: [ParNew: 176330K->15274K(184320K), 0.0371367 secs] 176330K->19229K(1028096K), 0.0372497 secs] [Times: user=0.09 sys=0.01, real=0.04 secs] 
==> /var/log/hadoop/hdfs/gc.log-201903150705 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25266048k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T07:05:27.250+0000: 1.081: [GC (GCLocker Initiated GC) 2019-03-15T07:05:27.250+0000: 1.081: [ParNew: 104960K->9552K(118016K), 0.0216132 secs] 104960K->9552K(1035520K), 0.0217476 secs] [Times: user=0.07 sys=0.01, real=0.02 secs] 
Heap
 par new generation   total 118016K, used 77289K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4226380, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c943e0, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18265K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.5 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals                 (-i) 128569
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903150711 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25578676k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T07:11:19.940+0000: 1.067: [GC (Allocation Failure) 2019-03-15T07:11:19.941+0000: 1.067: [ParNew: 104960K->9565K(118016K), 0.0127459 secs] 104960K->9565K(1035520K), 0.0128711 secs] [Times: user=0.04 sys=0.01, real=0.01 secs] 
Heap
 par new generation   total 118016K, used 77381K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  64% used [0x00000000c0000000, 0x00000000c4239e00, 0x00000000c6680000)
  from space 13056K,  73% used [0x00000000c7340000, 0x00000000c7c97738, 0x00000000c8000000)
  to   space 13056K,   0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
 concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 18287K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.out.2 <==
[same SLF4J warning and ulimit listing as the namenode .out.5 file above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.out.1 <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.out <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/gc.log-201903150714 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25570272k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-15T07:14:30.187+0000: 1.522: [GC (Allocation Failure) 2019-03-15T07:14:30.187+0000: 1.522: [ParNew: 163840K->12488K(184320K), 0.0137346 secs] 163840K->12488K(1028096K), 0.0138414 secs] [Times: user=0.04 sys=0.01, real=0.02 secs] 
2019-03-15T07:14:32.201+0000: 3.536: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 153529K(1028096K), 0.0146112 secs] [Times: user=0.07 sys=0.00, real=0.01 secs] 
2019-03-15T07:14:32.216+0000: 3.551: [CMS-concurrent-mark-start]
2019-03-15T07:14:32.222+0000: 3.557: [CMS-concurrent-mark: 0.006/0.006 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2019-03-15T07:14:32.222+0000: 3.557: [CMS-concurrent-preclean-start]
2019-03-15T07:14:32.223+0000: 3.558: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-15T07:14:32.223+0000: 3.558: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2019-03-15T07:14:37.337+0000: 8.672: [CMS-concurrent-abortable-preclean: 1.457/5.113 secs] [Times: user=2.03 sys=0.04, real=5.11 secs] 
2019-03-15T07:14:37.337+0000: 8.672: [GC (CMS Final Remark) [YG occupancy: 153529 K (184320 K)]2019-03-15T07:14:37.337+0000: 8.672: [Rescan (parallel) , 0.0201235 secs]2019-03-15T07:14:37.357+0000: 8.692: [weak refs processing, 0.0000361 secs]2019-03-15T07:14:37.357+0000: 8.692: [class unloading, 0.0054786 secs]2019-03-15T07:14:37.363+0000: 8.698: [scrub symbol table, 0.0038309 secs]2019-03-15T07:14:37.367+0000: 8.702: [scrub string table, 0.0005451 secs][1 CMS-remark: 0K(843776K)] 153529K(1028096K), 0.0306999 secs] [Times: user=0.08 sys=0.00, real=0.03 secs] 
2019-03-15T07:14:37.368+0000: 8.703: [CMS-concurrent-sweep-start]
2019-03-15T07:14:37.368+0000: 8.703: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-15T07:14:37.368+0000: 8.703: [CMS-concurrent-reset-start]
2019-03-15T07:14:37.372+0000: 8.707: [CMS-concurrent-reset: 0.004/0.004 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2019-03-15T07:16:05.639+0000: 96.974: [GC (Allocation Failure) 2019-03-15T07:16:05.640+0000: 96.975: [ParNew: 176328K->17023K(184320K), 0.0591183 secs] 176328K->20944K(1028096K), 0.0592516 secs] [Times: user=0.10 sys=0.01, real=0.06 secs] 
==> /var/log/hadoop/hdfs/gc.log-201903150716 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25281092k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=5100273664 -XX:MaxHeapSize=5100273664 -XX:MaxNewSize=637534208 -XX:MaxTenuringThreshold=6 -XX:NewSize=637534208 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
Heap
 par new generation   total 560384K, used 278994K [0x0000000690000000, 0x00000006b6000000, 0x00000006b6000000)
  eden space 498176K,  56% used [0x0000000690000000, 0x00000006a1074bf8, 0x00000006ae680000)
  from space 62208K,   0% used [0x00000006ae680000, 0x00000006ae680000, 0x00000006b2340000)
  to   space 62208K,   0% used [0x00000006b2340000, 0x00000006b2340000, 0x00000006b6000000)
 concurrent mark-sweep generation total 4358144K, used 0K [0x00000006b6000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 18257K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2244K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150730 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(23962428k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=5100273664 -XX:MaxHeapSize=5100273664 -XX:MaxNewSize=637534208 -XX:MaxTenuringThreshold=6 -XX:NewSize=637534208 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
Heap
 par new generation   total 560384K, used 278995K [0x0000000690000000, 0x00000006b6000000, 0x00000006b6000000)
  eden space 498176K,  56% used [0x0000000690000000, 0x00000006a1074d58, 0x00000006ae680000)
  from space 62208K,   0% used [0x00000006ae680000, 0x00000006ae680000, 0x00000006b2340000)
  to   space 62208K,   0% used [0x00000006b2340000, 0x00000006b2340000, 0x00000006b6000000)
 concurrent mark-sweep generation total 4358144K, used 0K [0x00000006b6000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 18259K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2245K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.4 <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.3 <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.2 <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.1 <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out <==
[same SLF4J warning and ulimit listing as above]
==> /var/log/hadoop/hdfs/gc.log-201903150804 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(23790412k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=5100273664 -XX:MaxHeapSize=5100273664 -XX:MaxNewSize=637534208 -XX:MaxTenuringThreshold=6 -XX:NewSize=637534208 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
Heap
 par new generation   total 560384K, used 278994K [0x0000000690000000, 0x00000006b6000000, 0x00000006b6000000)
  eden space 498176K,  56% used [0x0000000690000000, 0x00000006a1074b90, 0x00000006ae680000)
  from space 62208K,   0% used [0x00000006ae680000, 0x00000006ae680000, 0x00000006b2340000)
  to   space 62208K,   0% used [0x00000006b2340000, 0x00000006b2340000, 0x00000006b6000000)
 concurrent mark-sweep generation total 4358144K, used 0K [0x00000006b6000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 18247K, capacity 18612K, committed 18816K, reserved 1064960K
  class space    used 2244K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15 08:04:31,014 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 08:04:31,038 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 08:04:31,038 - The 'hadoop-hdfs-namenode' component did not advertise a version. This may indicate a problem with the component packaging.

Command failed after 1 tries
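
To summarize: the Ambari start operation fails because the NameNode HTTP server cannot bind sipnamenode.novalocal:50070, and the "did not advertise a version" warning is a side effect of the component never starting. If the hostname mapping cannot be corrected right away, one commonly used workaround (a sketch, not a production recommendation) is to have the HTTP server listen on all interfaces by overriding dfs.namenode.http-address in hdfs-site.xml through Ambari:

    <!-- Hypothetical override: listen on all interfaces instead of the resolved hostname -->
    <property>
      <name>dfs.namenode.http-address</name>
      <value>0.0.0.0:50070</value>
    </property>

After fixing name resolution or applying the override, restart HDFS from Ambari.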

