
HBase doesn't restart when we install a new service


New Contributor

Hello,

We have a problem when we try to restart the HBase service.

This is a fresh install.

stderr:   /var/lib/ambari-agent/data/errors-331.txt


Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py", line 157, in <module>
    HbaseMaster().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py", line 84, in start
    self.configure(env) # for security
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py", line 39, in configure
    hbase(name='master')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 199, in hbase
    owner=params.hbase_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 247, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 231, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 292, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 179, in run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://trezastoto.titi:50070/webhdfs/v1/apps/hbase/data?op=GETFILESTATUS&user.name=hdfs' 1>/var/tmp/tmpNtsxcq 2>/var/tmp/tmpZxLNSI' returned 7. curl: (7) Failed connect to trezastoto.titi:50070; Connection refused
000
stdout:   /var/lib/ambari-agent/data/output-331.txt


2017-04-07 11:48:40,602 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2017-04-07 11:48:40,604 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2017-04-07 11:48:40,607 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-07 11:48:40,652 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2017-04-07 11:48:40,652 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-07 11:48:40,691 - checked_call returned (0, '')
2017-04-07 11:48:40,692 - Ensuring that hadoop has the correct symlink structure
2017-04-07 11:48:40,692 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-07 11:48:41,019 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2017-04-07 11:48:41,022 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2017-04-07 11:48:41,025 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-07 11:48:41,062 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2017-04-07 11:48:41,062 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-07 11:48:41,099 - checked_call returned (0, '')
2017-04-07 11:48:41,100 - Ensuring that hadoop has the correct symlink structure
2017-04-07 11:48:41,100 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-07 11:48:41,102 - Skipping creation of User and Group as host is sys prepped or ignore_groupsusers_create flag is on
2017-04-07 11:48:41,102 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-04-07 11:48:41,245 - Skipping setting uid for hbase user as host is sys prepped
2017-04-07 11:48:41,245 - FS Type: 
2017-04-07 11:48:41,245 - Directory['/etc/hadoop'] {'mode': 0755}
2017-04-07 11:48:41,311 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-07 11:48:41,394 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-04-07 11:48:41,475 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-04-07 11:48:41,483 - Skipping Execute[('setenforce', '0')] due to not_if
2017-04-07 11:48:41,483 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-04-07 11:48:41,544 - Changing group for /var/log/hadoop from 0 to hadoop
2017-04-07 11:48:41,580 - Changing permission for /var/log/hadoop from 777 to 775
2017-04-07 11:48:41,697 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-04-07 11:48:41,845 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-04-07 11:48:41,966 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-04-07 11:48:42,054 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-04-07 11:48:42,134 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-04-07 11:48:42,250 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-07 11:48:42,328 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-04-07 11:48:42,421 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-04-07 11:48:42,488 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-04-07 11:48:42,569 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-04-07 11:48:43,020 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2017-04-07 11:48:43,033 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2017-04-07 11:48:43,036 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2017-04-07 11:48:43,039 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-07 11:48:43,076 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2017-04-07 11:48:43,076 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-07 11:48:43,113 - checked_call returned (0, '')
2017-04-07 11:48:43,113 - Ensuring that hadoop has the correct symlink structure
2017-04-07 11:48:43,113 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-07 11:48:43,119 - checked_call['hostid'] {}
2017-04-07 11:48:43,123 - checked_call returned (0, '7a641ef1')
2017-04-07 11:48:43,129 - Execute['/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-master/conf stop master'] {'only_if': 'ambari-sudo.sh  -H -E test -f /var/run/hbase/hbase-hbase-master.pid && ps -p `ambari-sudo.sh  -H -E cat /var/run/hbase/hbase-hbase-master.pid` >/dev/null 2>&1', 'on_timeout': '! ( ambari-sudo.sh  -H -E test -f /var/run/hbase/hbase-hbase-master.pid && ps -p `ambari-sudo.sh  -H -E cat /var/run/hbase/hbase-hbase-master.pid` >/dev/null 2>&1 ) || ambari-sudo.sh -H -E kill -9 `ambari-sudo.sh  -H -E cat /var/run/hbase/hbase-hbase-master.pid`', 'timeout': 30, 'user': 'hbase'}
2017-04-07 11:48:43,145 - Skipping Execute['/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-master/conf stop master'] due to only_if
2017-04-07 11:48:43,145 - File['/var/run/hbase/hbase-hbase-master.pid'] {'action': ['delete']}
2017-04-07 11:48:43,181 - Pid file /var/run/hbase/hbase-hbase-master.pid is empty or does not exist
2017-04-07 11:48:43,184 - Directory['/etc/hbase'] {'mode': 0755}
2017-04-07 11:48:43,230 - Directory['/usr/hdp/current/hbase-master/conf'] {'owner': 'hbase', 'group': 'hadoop', 'create_parents': True}
2017-04-07 11:48:43,277 - Changing owner for /usr/hdp/current/hbase-master/conf from 0 to hbase
2017-04-07 11:48:43,277 - Changing group for /usr/hdp/current/hbase-master/conf from 0 to hadoop
2017-04-07 11:48:43,293 - Directory['/hadoop/work/tmp'] {'create_parents': True, 'mode': 0777}
2017-04-07 11:48:43,338 - Changing permission for /hadoop/work/tmp from 1777 to 777
2017-04-07 11:48:43,353 - Directory['/hadoop/work/tmp'] {'create_parents': True, 'cd_access': 'a'}
2017-04-07 11:48:43,474 - Execute[('chmod', '1777', u'/hadoop/work/tmp')] {'sudo': True}
2017-04-07 11:48:43,490 - XmlConfig['hbase-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-07 11:48:43,504 - Generating config: /usr/hdp/current/hbase-master/conf/hbase-site.xml
2017-04-07 11:48:43,504 - File['/usr/hdp/current/hbase-master/conf/hbase-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-07 11:48:43,621 - XmlConfig['core-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'configurations': ...}
2017-04-07 11:48:43,637 - Generating config: /usr/hdp/current/hbase-master/conf/core-site.xml
2017-04-07 11:48:43,637 - File['/usr/hdp/current/hbase-master/conf/core-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-07 11:48:43,741 - XmlConfig['hdfs-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-04-07 11:48:43,752 - Generating config: /usr/hdp/current/hbase-master/conf/hdfs-site.xml
2017-04-07 11:48:43,752 - File['/usr/hdp/current/hbase-master/conf/hdfs-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-07 11:48:43,878 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-04-07 11:48:43,889 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-04-07 11:48:43,889 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-07 11:48:44,016 - XmlConfig['hbase-policy.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {}, 'configurations': {u'security.masterregion.protocol.acl': u'*', u'security.admin.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}}
2017-04-07 11:48:44,027 - Generating config: /usr/hdp/current/hbase-master/conf/hbase-policy.xml
2017-04-07 11:48:44,027 - File['/usr/hdp/current/hbase-master/conf/hbase-policy.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-07 11:48:44,125 - File['/usr/hdp/current/hbase-master/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop'}
2017-04-07 11:48:44,191 - Writing File['/usr/hdp/current/hbase-master/conf/hbase-env.sh'] because contents don't match
2017-04-07 11:48:44,222 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-04-07 11:48:44,271 - File['/etc/security/limits.d/hbase.conf'] {'content': Template('hbase.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-04-07 11:48:44,367 - TemplateConfig['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] {'owner': 'hbase', 'template_tag': 'GANGLIA-MASTER'}
2017-04-07 11:48:44,376 - File['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2017-04-07 11:48:44,441 - Writing File['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] because contents don't match
2017-04-07 11:48:44,473 - TemplateConfig['/usr/hdp/current/hbase-master/conf/regionservers'] {'owner': 'hbase', 'template_tag': None}
2017-04-07 11:48:44,476 - File['/usr/hdp/current/hbase-master/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2017-04-07 11:48:44,554 - Directory['/var/run/hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-04-07 11:48:44,710 - Directory['/var/log/hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-04-07 11:48:44,755 - Changing owner for /var/log/hbase from 0 to hbase
2017-04-07 11:48:44,786 - Changing permission for /var/log/hbase from 777 to 755
2017-04-07 11:48:44,903 - File['/usr/hdp/current/hbase-master/conf/log4j.properties'] {'content': ..., 'owner': 'hbase', 'group': 'hadoop', 'mode': 0644}
2017-04-07 11:48:45,001 - HdfsResource['hdfs://trezastoto.titi:8020/apps/hbase/data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://trezastoto.titi:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hbase', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-04-07 11:48:45,010 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://trezastoto.titi:50070/webhdfs/v1/apps/hbase/data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/var/tmp/tmpNtsxcq 2>/var/tmp/tmpZxLNSI''] {'logoutput': None, 'quiet': False}
2017-04-07 11:48:45,105 - call returned (7, '')


Command failed after 1 tries

Re: HBase doesn't restart when we install a new service

Super Mentor

@Karim HAMMADI

As we can see, the root cause of this error is "Connection refused":

curl: (7) Failed connect to trezastoto.titi:50070; Connection refused

- So please make sure that the NameNode is up and running and that port 50070 is accessible on the mentioned host. Doing a telnet will help isolate this: see whether the port is reachable, and check that no firewall is in the way (iptables disabled on the NN).

# telnet  trezastoto.titi  50070

- Also, since it is a connection-refused error, it is worth running the netstat command on the NameNode to find out whether the port is actually open:

# netstat -tnlpa | grep 50070

If the port is not open, then checking the NameNode log will give a better idea of whether some error caused the NN to not start successfully.
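If both of those look fine on the NameNode itself, it can also help to replay the exact WebHDFS call that failed, from the host where the restart was attempted (a sketch; the URL is taken verbatim from the error above, and nc is just an alternative if telnet is not installed):

# nc -zv trezastoto.titi 50070
# curl -sS -L -w '%{http_code}' -X GET 'http://trezastoto.titi:50070/webhdfs/v1/apps/hbase/data?op=GETFILESTATUS&user.name=hdfs'

A "Connection refused" from both commands confirms that nothing is listening on that port, independent of Ambari.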



Re: HBase doesn't restart when we install a new service

New Contributor

No port is listening on my NameNode.

The command netstat -tnlpa | grep 50070 returns no result.


Re: HBase doesn't restart when we install a new service

Super Mentor

@Karim HAMMADI

Since the netstat command shows no output, the chances are high that your NameNode is not running. So please check the NameNode log to see if there are any errors, and try restarting the NameNode. If you see any OutOfMemory-related errors there, then try increasing the NameNode heap size (-Xmx) from the Ambari UI.

- Please share the NN logs as well if you see any strange error:

/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxx.log
/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxx.out
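A quick way to check both things from the shell (a sketch; the exact log file name and conf dir are taken from this thread and may differ on your cluster):

# grep -iE 'OutOfMemoryError|java heap space' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
# grep -i 'Xmx' /usr/hdp/current/hadoop-client/conf/hadoop-env.sh

The first command looks for heap-exhaustion messages in the NameNode log; the second shows the heap settings currently rendered into hadoop-env.sh (which Ambari manages, as the stdout log above shows).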



Re: HBase doesn't restart when we install a new service

New Contributor

We found this error in /var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxx.log:

2017-04-07 00:35:49,929 INFO namenode.NameNode (NameNode.java:startCommonServices(876)) - NameNode RPC up at: trezastoto.titi/10.22.41.30:8020
2017-04-07 00:35:49,930 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1130)) - Starting services required for active state
2017-04-07 00:35:49,939 INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(161)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-04-07 00:35:50,070 INFO ipc.Server (Server.java:logException(2401)) - IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 10.22.41.38:49838 Call#0 Retry#0
org.apache.hadoop.ipc.RetriableException: NameNode still not started
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2057)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1532)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:261)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29074)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
2017-04-07 00:35:50,070 INFO ipc.Server (Server.java:logException(2401)) - IPC Server handler 68 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 10.22.41.36:59550 Call#0 Retry#0
org.apache.hadoop.ipc.RetriableException: NameNode still not started
    [identical stack trace repeated]
2017-04-07 00:35:50,323 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(224)) - The configured checkpoint interval is 0 minutes. Using an interval of 360 minutes that is used for deletion instead
2017-04-07 00:35:51,076 INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(915)) - BLOCK* registerDatanode: from DatanodeRegistration(10.22.41.37:50010, datanodeUuid=34f835e0-56b1-43b7-a767-affa96b1b26a, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-5187375a-1132-4529-8ce5-fd9c3765f21c;nsid=2089368810;c=0) storage 34f835e0-56b1-43b7-a767-affa96b1b26a
2017-04-07 00:35:51,076 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(451)) - Number of failed storage changes from 0 to 0
2017-04-07 00:35:51,103 INFO net.NetworkTopology (NetworkTopology.java:add(426)) - Adding a new node: /default-rack/10.22.41.37:50010


Re: HBase doesn't restart when we install a new service

Super Mentor

@Karim HAMMADI

Your NameNode log says that it is not coming up successfully. Please notice the following error:

org.apache.hadoop.ipc.RetriableException: NameNode still not started 
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2057) 
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1532)

So you should read the NN log carefully to understand why it is not starting successfully; the cause may be a few lines above the log snippet that you posted here.

I would suggest restarting the NameNode fresh, capturing the log from the moment it starts, and noting any WARN/ERROR entries that appear before the message "NameNode still not started" shows up in the log.
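One convenient way to capture that (a sketch; the log file name pattern is taken from earlier in this thread and may differ on your host) is to follow the log in one terminal while restarting the NameNode from Ambari in another:

# tail -F /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | grep -E 'WARN|ERROR|Exception'

Any message matching those patterns that appears before "NameNode still not started" is a good candidate for the root cause.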
