
Unable to start the NameNode server

New Contributor

Here are the log messages, please help me out.

stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 420, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 101, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 156, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out

stdout:

2017-11-09 07:29:26,763 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37 2017-11-09 07:29:26,765 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0 2017-11-09 07:29:26,767 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2017-11-09 07:29:26,796 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '') 2017-11-09 07:29:26,797 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False} 2017-11-09 07:29:26,819 - checked_call returned (0, '') 2017-11-09 07:29:26,819 - Ensuring that hadoop has the correct symlink structure 2017-11-09 07:29:26,820 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-09 07:29:26,950 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37 2017-11-09 07:29:26,951 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0 2017-11-09 07:29:26,953 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2017-11-09 07:29:26,977 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '') 2017-11-09 07:29:26,977 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False} 2017-11-09 07:29:26,999 - checked_call returned (0, '') 2017-11-09 07:29:27,000 - Ensuring that hadoop has the correct symlink structure 2017-11-09 07:29:27,000 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-09 07:29:27,001 - Group['livy'] {} 2017-11-09 07:29:27,003 - Group['spark'] {} 2017-11-09 07:29:27,003 - Group['hadoop'] {} 2017-11-09 07:29:27,003 - Group['users'] {} 2017-11-09 07:29:27,003 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,004 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,005 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,006 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,006 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,007 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,008 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-11-09 07:29:27,008 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-11-09 07:29:27,009 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,010 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,010 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-09 07:29:27,011 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 
'groups': [u'hadoop']} 2017-11-09 07:29:27,012 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-09 07:29:27,014 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2017-11-09 07:29:27,019 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if 2017-11-09 07:29:27,019 - Group['hdfs'] {} 2017-11-09 07:29:27,019 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']} 2017-11-09 07:29:27,020 - FS Type: 2017-11-09 07:29:27,020 - Directory['/etc/hadoop'] {'mode': 0755} 2017-11-09 07:29:27,033 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-09 07:29:27,034 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2017-11-09 07:29:27,046 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2017-11-09 07:29:27,056 - Skipping Execute[('setenforce', '0')] due to only_if 2017-11-09 07:29:27,057 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2017-11-09 07:29:27,058 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2017-11-09 07:29:27,059 - Changing owner for /var/run/hadoop from 1025 to root 2017-11-09 07:29:27,059 - Changing group for /var/run/hadoop from 1019 to root 2017-11-09 07:29:27,059 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2017-11-09 07:29:27,063 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2017-11-09 07:29:27,065 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2017-11-09 07:29:27,066 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2017-11-09 07:29:27,077 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-09 07:29:27,077 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2017-11-09 07:29:27,078 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2017-11-09 07:29:27,082 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'} 2017-11-09 07:29:27,086 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2017-11-09 07:29:27,240 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37 2017-11-09 07:29:27,242 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0 2017-11-09 07:29:27,244 - 
call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2017-11-09 07:29:27,266 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '') 2017-11-09 07:29:27,266 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False} 2017-11-09 07:29:27,288 - checked_call returned (0, '') 2017-11-09 07:29:27,289 - Ensuring that hadoop has the correct symlink structure 2017-11-09 07:29:27,289 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-09 07:29:27,290 - Stack Feature Version Info: stack_version=2.5, version=2.5.3.0-37, current_cluster_version=2.5.3.0-37 -> 2.5.3.0-37 2017-11-09 07:29:27,305 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37 2017-11-09 07:29:27,307 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0 2017-11-09 07:29:27,309 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2017-11-09 07:29:27,331 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '') 2017-11-09 07:29:27,332 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False} 2017-11-09 07:29:27,353 - checked_call returned (0, '') 2017-11-09 07:29:27,354 - Ensuring that hadoop has the correct symlink structure 2017-11-09 07:29:27,354 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-09 07:29:27,365 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1} 2017-11-09 07:29:27,400 - checked_call returned (0, '2.5.3.0-37', '') 2017-11-09 07:29:27,404 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'} 2017-11-09 07:29:27,409 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644} 2017-11-09 07:29:27,410 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2017-11-09 07:29:27,420 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml 2017-11-09 07:29:27,420 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,429 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2017-11-09 07:29:27,436 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml 2017-11-09 07:29:27,436 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,442 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 
2017-11-09 07:29:27,443 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...} 2017-11-09 07:29:27,450 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml 2017-11-09 07:29:27,450 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,456 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2017-11-09 07:29:27,463 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml 2017-11-09 07:29:27,463 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,470 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...} 2017-11-09 07:29:27,477 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml 2017-11-09 07:29:27,477 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,521 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2017-11-09 07:29:27,528 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml 2017-11-09 07:29:27,528 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-09 07:29:27,548 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'} 2017-11-09 07:29:27,551 - Directory['/hadoop/hdfs/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2017-11-09 07:29:27,553 - Called service start with upgrade_type: None 2017-11-09 07:29:27,554 - Ranger admin not installed 2017-11-09 07:29:27,554 - /hadoop/hdfs/hadoop/hdfs/namenode/namenode-formatted/ exists. 
Namenode DFS already formatted 2017-11-09 07:29:27,554 - Directory['/hadoop/hdfs/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True} 2017-11-09 07:29:27,555 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'} 2017-11-09 07:29:27,556 - Options for start command are: 2017-11-09 07:29:27,556 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755} 2017-11-09 07:29:27,556 - Changing owner for /var/run/hadoop from 0 to hdfs 2017-11-09 07:29:27,557 - Changing group for /var/run/hadoop from 0 to hadoop 2017-11-09 07:29:27,557 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True} 2017-11-09 07:29:27,557 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True} 2017-11-09 07:29:27,558 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'} 2017-11-09 07:29:27,570 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] 2017-11-09 07:29:27,570 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'} 2017-11-09 07:29:31,670 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'} ==> /var/log/hadoop/hdfs/gc.log-201711090635 <== Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8) Memory: 4k page, physical 57704496k(50979720k free), swap 0k(0k free) CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 2017-11-09T06:35:16.201+0000: 1.154: [GC (Allocation Failure) 2017-11-09T06:35:16.201+0000: 1.154: [ParNew: 104960K->11615K(118016K), 0.0185932 secs] 104960K->11615K(1035520K), 0.0186763 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] Heap par new generation total 118016K, used 27998K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000) eden space 104960K, 15% used [0x00000000c0000000, 0x00000000c0fffb98, 0x00000000c6680000) from space 13056K, 88% us2017-11-09T06:35:11.830+0000: 3.387: [CMS-concurrent-abortable-preclean-start] CMS: abort preclean due to time 2017-11-09T06:35:16.848+0000: 8.405: 
[CMS-concurrent-abortable-preclean: 1.501/5.018 secs] [Times: user=1.51 sys=0.00, real=5.02 secs] 2017-11-09T06:35:16.848+0000: 8.405: [GC (CMS Final Remark) [YG occupancy: 98742 K (184320 K)]2017-11-09T06:35:16.848+0000: 8.405: [Rescan (parallel) , 0.0063721 secs]2017-11-09T06:35:16.855+0000: 8.412: [weak refs processing, 0.0000190 secs]2017-11-09T06:35:16.855+0000: 8.412: [class unloading, 0.0028047 secs]2017-11-09T06:35:16.857+0000: 8.415: [scrub symbol table, 0.0025351 secs]2017-11-09T06:35:16.860+0000: 8.417: [scrub string table, 0.0004282 secs][1 CMS-remark: 0K(843776K)] 98742K(1028096K), 0.0127806 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 2017-11-09T06:35:16.861+0000: 8.418: [CMS-concurrent-sweep-start] 2017-11-09T06:35:16.861+0000: 8.418: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 2017-11-09T06:35:16.861+0000: 8.418: [CMS-concurrent-reset-start] 2017-11-09T06:35:16.865+0000: 8.422: [CMS-concurrent-reset: 0.004/0.004 secs] [Times: user=0.01 sys=0.01, real=0.01 secs] 2017-11-09T06:55:50.256+0000: 1241.814: [GC (Allocation Failure) 2017-11-09T06:55:50.257+0000: 1241.814: [ParNew: 177623K->13487K(184320K), 0.0259235 secs] 177623K->18409K(1028096K), 0.0260052 secs] [Times: user=0.08 sys=0.01, real=0.03 secs] Heap par new generation total 184320K, used 79781K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000) eden space 163840K, 40% used [0x00000000c0000000, 0x00000000c40bdae0, 0x00000000ca000000) from space 20480K, 65% used [0x00000000ca000000, 0x00000000cad2bcd0, 0x00000000cb400000) to space 20480K, 0% used [0x00000000cb400000, 0x00000000cb400000, 0x00000000cc800000) concurrent mark-sweep generation total 843776K, used 4922K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000) Metaspace used 27733K, capacity 28162K, committed 28488K, reserved 1075200K class space used 3231K, capacity 3373K, committed 3400K, reserved 1048576K ==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.log <== 2017-11-09 07:29:23,660 INFO datanode.BlockScanner (BlockScanner.java:<init>(172)) - Initialized block scanner with targetBytesPerSec 1048576 2017-11-09 07:29:23,661 INFO datanode.DataNode (DataNode.java:<init>(437)) - File descriptor passing is enabled. 
2017-11-09 07:29:23,661 INFO datanode.DataNode (DataNode.java:<init>(448)) - Configured hostname is lon-hadoop-02.southeastasia.cloudapp.azure.com 2017-11-09 07:29:23,666 INFO datanode.DataNode (DataNode.java:startDataNode(1211)) - Starting DataNode with maxLockedMemory = 0 2017-11-09 07:29:23,685 INFO datanode.DataNode (DataNode.java:initDataXceiver(1004)) - Opened streaming server at /0.0.0.0:50010 2017-11-09 07:29:23,687 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwith is 6250000 bytes/s 2017-11-09 07:29:23,687 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 5 2017-11-09 07:29:23,690 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwith is 6250000 bytes/s 2017-11-09 07:29:23,690 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 5 2017-11-09 07:29:23,690 INFO datanode.DataNode (DataNode.java:initDataXceiver(1019)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket 2017-11-09 07:29:23,758 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2017-11-09 07:29:23,766 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(293)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 2017-11-09 07:29:23,770 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined 2017-11-09 07:29:23,775 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2017-11-09 07:29:23,778 INFO http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode 2017-11-09 07:29:23,778 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2017-11-09 07:29:23,778 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2017-11-09 07:29:23,778 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. 
Please set hadoop.http.cross-origin.enabled to 'true' to enable it 2017-11-09 07:29:23,791 INFO http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 45584 2017-11-09 07:29:23,791 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx 2017-11-09 07:29:23,932 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45584 2017-11-09 07:29:24,071 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(233)) - Listening HTTP traffic on /0.0.0.0:50075 2017-11-09 07:29:24,073 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor 2017-11-09 07:29:24,179 INFO datanode.DataNode (DataNode.java:startDataNode(1228)) - dnUserName = hdfs 2017-11-09 07:29:24,180 INFO datanode.DataNode (DataNode.java:startDataNode(1229)) - supergroup = hdfs 2017-11-09 07:29:24,215 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler 2017-11-09 07:29:24,229 INFO ipc.Server (Server.java:run(811)) - Starting Socket Reader #1 for port 8010 2017-11-09 07:29:24,253 INFO datanode.DataNode (DataNode.java:initIpcServer(917)) - Opened IPC server at /0.0.0.0:8010 2017-11-09 07:29:24,262 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(152)) - Refresh request received for nameservices: null 2017-11-09 07:29:24,281 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(201)) - Starting BPOfferServices for nameservices: <default> 2017-11-09 07:29:24,291 INFO datanode.DataNode (BPServiceActor.java:run(733)) - Block pool <registering> (Datanode Uuid unassigned) service to lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020 starting to offer service 2017-11-09 07:29:24,296 INFO ipc.Server (Server.java:run(1045)) - IPC Server Responder: starting 2017-11-09 07:29:24,296 INFO ipc.Server (Server.java:run(881)) - IPC Server listener on 8010: starting 2017-11-09 07:29:25,377 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:26,379 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:27,381 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:28,382 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:29,383 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. 
Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:30,385 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) 2017-11-09 07:29:31,386 INFO ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS) ==> /var/log/hadoop/hdfs/SecurityAuth.audit <== ==> /var/log/hadoop/hdfs/hdfs-audit.log <== ==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.log <== at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1754) Caused by: java.net.BindException: Cannot assign requested address at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:958) ... 8 more 2017-11-09 07:29:28,877 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system... 2017-11-09 07:29:28,878 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted. 2017-11-09 07:29:28,879 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped. 2017-11-09 07:29:28,879 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:run(416)) - Closing HadoopTimelineMetricSink. Flushing metrics to collector... 2017-11-09 07:29:28,879 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete. 2017-11-09 07:29:28,879 ERROR namenode.NameNode (NameNode.java:main(1759)) - Failed to start namenode. 
java.net.BindException: Port in use: lon-hadoop-02.southeastasia.cloudapp.azure.com:50070 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:963) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:900) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:933) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1686) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1754) Caused by: java.net.BindException: Cannot assign requested address at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:958) ... 8 more 2017-11-09 07:29:28,880 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1 2017-11-09 07:29:28,881 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at lon-hadoop-02.southeastasia.cloudapp.azure.com/13.76.173.149 ************************************************************/ ==> /var/log/hadoop/hdfs/gc.log-201711090707 <== Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8) Memory: 4k page, physical 57704496k(50158332k free), swap 0k(0k free) CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 2017-11-09T07:07:09.560+0000: 1.100: [GC (Allocation Failure) 2017-11-09T07:07:09.560+0000: 1.100: [ParNew: 104960K->11635K(118016K), 0.0229932 secs] 104960K->11635K(1035520K), 0.0230872 secs] [Times: user=0.06 sys=0.00, real=0.03 secs] Heap par new generation total 118016K, used 27391K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000) eden space 104960K, 15% used [0x00000000c0000000, 0x00000000c0f630a0, 0x00000000c6680000) from space 13056K, 89% used [0x00000000c7340000, 0x00000000c7e9cc20, 0x00000000c8000000) to space 13056K, 0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000) concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000) Metaspace used 16826K, capacity 17046K, committed 17280K, reserved 1064960K 
class space used 2047K, capacity 2161K, committed 2176K, reserved 1048576K ==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out.1 <== ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 225325 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out <== ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 225325 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop/hdfs/gc.log-201711090729 <== Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8) Memory: 4k page, physical 57704496k(50436324k free), swap 0k(0k free) CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 2017-11-09T07:29:28.800+0000: 1.112: [GC (Allocation Failure) 2017-11-09T07:29:28.800+0000: 1.112: [ParNew: 104960K->11613K(118016K), 0.0216674 secs] 104960K->11613K(1035520K), 0.0217489 secs] [Times: user=0.08 sys=0.00, real=0.03 secs] Heap par new generation total 118016K, used 27471K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000) eden space 104960K, 15% used [0x00000000c0000000, 0x00000000c0f7c778, 0x00000000c6680000) from space 13056K, 88% us2017-11-09T07:29:25.909+0000: 3.372: [CMS-concurrent-abortable-preclean-start] CMS: abort preclean due to time 2017-11-09T07:29:31.047+0000: 8.510: [CMS-concurrent-abortable-preclean: 1.518/5.138 secs] [Times: user=1.53 sys=0.00, real=5.14 secs] 2017-11-09T07:29:31.047+0000: 8.510: [GC (CMS Final Remark) [YG occupancy: 98771 K (184320 K)]2017-11-09T07:29:31.047+0000: 8.510: [Rescan (parallel) , 0.0062030 secs]2017-11-09T07:29:31.054+0000: 8.517: [weak refs processing, 0.0000204 secs]2017-11-09T07:29:31.054+0000: 8.517: [class unloading, 0.0027886 secs]2017-11-09T07:29:31.056+0000: 8.519: [scrub symbol table, 0.0032437 secs]2017-11-09T07:29:31.060+0000: 8.523: [scrub string table, 0.0005082 secs][1 CMS-remark: 
0K(843776K)] 98771K(1028096K), 0.0133912 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 2017-11-09T07:29:31.061+0000: 8.524: [CMS-concurrent-sweep-start] 2017-11-09T07:29:31.061+0000: 8.524: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 2017-11-09T07:29:31.061+0000: 8.524: [CMS-concurrent-reset-start] 2017-11-09T07:29:31.065+0000: 8.528: [CMS-concurrent-reset: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] ==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out.2 <== ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 225325 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out.1 <== ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 225325 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-LON-HADOOP-02.southeastasia.cloudapp.azure.com.out <== ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 225325 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Command failed after 1 tries

1 REPLY

Super Mentor

@Satish kallakuri

The error says that the port is already in use:

Failed to start namenode.
java.net.BindException: Port in use: lon-hadoop-02.southeastasia.cloudapp.azure.com:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:963)
    at ...

.

So please SSH to the host "lon-hadoop-02.southeastasia.cloudapp.azure.com", find out which other process is using that port, and then kill it.

# netstat -tnlpa | grep 50070
# kill -9 $PID_FROM_ABOVE_COMMAND
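
If netstat is not available on that host, the same check can be done with ss or lsof instead (assuming either utility is installed there), for example:

# ss -tlnp | grep 50070
# lsof -i :50070

Either command shows the PID of the process currently bound to port 50070.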

Then try to restart the NameNode again.
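
For reference, the exact start command that Ambari runs (taken from the stderr output above) can also be issued manually once the port is free:

# ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'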

.
