Support Questions


Unable to start the Namenode, HDP 2.2.3

Explorer

Log:

2016-10-18 13:58:22,747 - Group['hadoop'] {}
2016-10-18 13:58:22,749 - Group['users'] {}
2016-10-18 13:58:22,749 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:22,897 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:23,046 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:23,168 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-10-18 13:58:23,293 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2016-10-18 13:58:23,464 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:23,756 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:23,905 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:24,071 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:24,223 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:24,388 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-10-18 13:58:24,547 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-18 13:58:24,548 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-10-18 13:58:24,552 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-10-18 13:58:24,553 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-10-18 13:58:24,554 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-18 13:58:24,555 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-10-18 13:58:24,559 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-10-18 13:58:24,559 - Group['hdfs'] {'ignore_failures': False}
2016-10-18 13:58:24,559 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-10-18 13:58:24,698 - Directory['/etc/hadoop'] {'mode': 0755}
2016-10-18 13:58:24,709 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-18 13:58:24,710 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-10-18 13:58:24,722 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-10-18 13:58:24,728 - Skipping Execute[('setenforce', '0')] due to not_if
2016-10-18 13:58:24,728 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-10-18 13:58:24,731 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-10-18 13:58:24,731 - Changing owner for /var/run/hadoop from 54372 to root
2016-10-18 13:58:24,731 - Changing group for /var/run/hadoop from 54343 to root
2016-10-18 13:58:24,731 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-10-18 13:58:24,735 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-10-18 13:58:24,737 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-10-18 13:58:24,737 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-10-18 13:58:24,744 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-10-18 13:58:24,745 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-10-18 13:58:24,746 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-10-18 13:58:24,750 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-10-18 13:58:24,754 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-10-18 13:58:24,904 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-10-18 13:58:24,910 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-10-18 13:58:24,911 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-10-18 13:58:24,919 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-10-18 13:58:24,919 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-10-18 13:58:24,927 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-10-18 13:58:24,934 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-10-18 13:58:24,934 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-10-18 13:58:24,939 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-10-18 13:58:24,940 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-10-18 13:58:24,947 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-10-18 13:58:24,947 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-10-18 13:58:24,952 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-10-18 13:58:24,960 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-10-18 13:58:24,960 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-10-18 13:58:24,966 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-10-18 13:58:24,972 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-10-18 13:58:24,973 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-10-18 13:58:25,009 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-10-18 13:58:25,018 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-10-18 13:58:25,018 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-10-18 13:58:25,034 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-10-18 13:58:25,035 - Directory['/opt/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-10-18 13:58:25,035 - Directory['/tmp/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-10-18 13:58:25,036 - Directory['/var/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-10-18 13:58:25,036 - Directory['/var/log/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-10-18 13:58:25,036 - Directory['/var/log/audit/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2016-10-18 13:58:25,037 - Ranger admin not installed
/opt/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
/tmp/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
/var/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
/var/log/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
/var/log/audit/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2016-10-18 13:58:25,037 - Directory['/opt/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2016-10-18 13:58:25,038 - Directory['/tmp/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2016-10-18 13:58:25,038 - Directory['/var/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2016-10-18 13:58:25,038 - Directory['/var/log/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2016-10-18 13:58:25,038 - Directory['/var/log/audit/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}
2016-10-18 13:58:25,040 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2016-10-18 13:58:25,040 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-10-18 13:58:25,041 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-10-18 13:58:25,041 - Changing group for /var/run/hadoop from 0 to hadoop
2016-10-18 13:58:25,041 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-10-18 13:58:25,041 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-10-18 13:58:25,042 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2016-10-18 13:58:25,046 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
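Side note: the output above is the Ambari operation log, and its final Execute step only launches hadoop-daemon.sh. Whether the NameNode actually stayed up can be checked against the same pid file that step references. A minimal sketch, reusing the paths from the log; the log file name below is an assumption based on HDP defaults:

# Check whether the NameNode launched by the Execute step above is still running
# (pid file path taken from the log; log file name assumed from HDP defaults).
PIDFILE=/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid
if [ -f "$PIDFILE" ] && pgrep -F "$PIDFILE" > /dev/null; then
    echo "NameNode is running with pid $(cat "$PIDFILE")"
else
    # If the process died right after start, the reason will be in the NameNode
    # log itself, not in this Ambari operation output.
    echo "NameNode is not running; check /var/log/hadoop/hdfs/hadoop-hdfs-namenode-$(hostname).log"
fi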

1 ACCEPTED SOLUTION

Explorer

I had installed and uninstalled Ambari so many times that the cluster ID (and storage UUIDs) on the NameNode no longer matched those on the DataNodes. That is why neither the NameNode nor the DataNodes would start.

So I did the following:

rm -rf /opt/hadoop/hdfs/data/* /tmp/hadoop/hdfs/data/* /var/hadoop/hdfs/data /var/log/hadoop/hdfs/data /var/log/audit/hadoop/hdfs/data/*

sudo -u hdfs /usr/hdp/2.3.6.0-3796/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode

su root
/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode

By doing this I fixed my issue. Hopefully it will help someone.
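For anyone hitting the same mismatch, a quick way to confirm it before deleting any data is to compare the clusterID recorded in each storage directory's VERSION file; the IDs must match across the NameNode and DataNode directories. A minimal sketch, assuming the directory layout from this cluster; adjust the paths to your own dfs.namenode.name.dir and dfs.datanode.data.dir settings:

# Assumed paths: the namenode/data directories mentioned in this thread.
for d in /opt/hadoop/hdfs/namenode /opt/hadoop/hdfs/data \
         /tmp/hadoop/hdfs/namenode /tmp/hadoop/hdfs/data; do
    # Every formatted storage directory records its clusterID in current/VERSION.
    [ -f "$d/current/VERSION" ] && echo "$d -> $(grep clusterID "$d/current/VERSION")"
done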

Thanks


4 REPLIES

Super Guru
@kiran thella

Which file is this log from? Is it a file under the /var/log/hadoop folder?

Thanks

Explorer

@mqureshi It's from /var/log/ambari-server.

Super Guru

Those are Ambari server logs. If your NameNode is down, can you please share the NameNode logs? They should be under /var/log/hadoop.
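In case it helps, on an HDP node the NameNode log normally sits under /var/log/hadoop/hdfs (the hdfs user's log directory, which the Ambari output above creates) and is named after the host. A minimal sketch; the exact file name is an assumption based on the usual hadoop-<user>-namenode-<hostname>.log convention:

# List candidate NameNode logs, newest first (path assumed from HDP defaults).
ls -lt /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log

# Show the tail of the most recent startup attempt for this host.
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-"$(hostname)".log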
