Support Questions


ATS hbase does not seem to start

Expert Contributor


I installed a new (not an update) HDP 3.0.1 and seem to have many issues with the timeline server.

1) The first weird thing is that the YARN tab in Ambari keeps showing this alert:

ATSv2 HBase Application The HBase application reported a 'STARTED' state. Check took 2.125s

2) The second issue seems to be with Oozie. Running a job, it starts but then stalls, with the following log line repeated hundreds of times:

2018-11-01 11:15:37,842 INFO [Thread-82] org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain. Thread state is :WAITING

Then with:

2018-11-01 11:15:37,888 ERROR [Job ATS Event Dispatcher] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Exception while publishing configs on JOB_SUBMITTED Event  for the job : job_1541066376053_0066
org.apache.hadoop.yarn.exceptions.YarnException: Failed while publishing entity
	at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher.dispatchEntities(
	at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putEntities(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.publishConfigsOnJobSubmittedEvent(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processEventForNewTimelineService(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleTimelineEvent(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.access$1200(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(
	at org.apache.hadoop.yarn.event.AsyncDispatcher$
Caused by: com.sun.jersey.api.client.ClientHandlerException: Read timed out

3) In hadoop-yarn-timelineserver-${hostname}.log

I see, repeated many times:

2018-11-01 11:32:47,715 WARN timeline.EntityGroupFSTimelineStore ( - Error putting entity: dag_1541066376053_0144_2 (TEZ_DAG_ID): 6

4) In hadoop-yarn-timelinereader-${hostname}.log

I see, repeated many times:

Thu Nov 01 11:34:10 CET 2018, RpcRetryingCaller{globalStartTime=1541068444076, pause=1000, maxAttempts=4}, Call to /192.168.x.x:17020 failed on connection exception:$AnnotatedConnectException: Connection refused: /192.168.x.x:17020
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$
        ... 3 more
Caused by: Call to /192.168.x.x:17020 failed on connection exception:$AnnotatedConnectException: Connection refused: /192.168.x.x:17020
        at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(

and indeed, there is nothing listening on port 17020 on 192.168.x.x.
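For anyone checking the same symptom, a quick way to confirm whether anything is listening on the RegionServer port is a plain bash TCP probe. This is only a sketch; the host and port below are the placeholders from the logs above, not values you should use verbatim:

```shell
# Probe a TCP port and print OPEN or CLOSED. Uses bash's /dev/tcp
# redirection, so no nc/telnet is required; `timeout` is from coreutils.
check_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN"
  else
    echo "CLOSED"
  fi
}

# check_port 192.168.x.x 17020   # the ats-hbase RegionServer port from the stack trace
```

If this prints CLOSED on the node that is supposed to host ats-hbase, the "Connection refused" errors in the timelinereader log follow directly.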

5) I cannot find a process named ats-hbase on any server; this might be the root cause of everything else.

The only queue setting is yarn_hbase_system_service_queue_name=default, and that queue has no limit that would prevent HBase from starting.
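For context, queue limits of that kind live in the capacity scheduler configuration. A queue that could block ats-hbase from starting would carry settings along these lines (a hypothetical fragment with illustrative queue names and values, not taken from this cluster):

```properties
# Hypothetical capacity-scheduler properties. A very small
# maximum-capacity on the queue hosting ats-hbase could keep its
# containers from ever being allocated.
yarn.scheduler.capacity.root.queues=default,ats
yarn.scheduler.capacity.root.ats.capacity=10
yarn.scheduler.capacity.root.ats.maximum-capacity=10
yarn.scheduler.capacity.root.default.capacity=90
```

The default queue here has no such cap, which matches the observation above that the queue settings should not be the problem.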

I am sure that something is very wrong here, and any help would be appreciated.



I see the same Ambari alert "The HBase application reported a 'STARTED' state".

Why does Ambari alert on a 'STARTED' state?

Expert Contributor

Eventually, after restarting everything (not only the services flagged as requiring a restart), it went OK.

New Contributor

Same problem here. "The HBase application reported a 'STARTED' state". Still there after restarting the cluster. Should we care about it?


Same thing here. Even after restarting everything, the "The HBase application reported a 'STARTED' state" alert is still there.


Hi Geoffrey, I tried but the problem is still there, though it is not a big problem for my YARN application.

Expert Contributor

It worked for me eventually after cleaning up *everything*:

- destroying the app and cleaning hdfs as explained there:

- cleaning zookeeper:

zookeeper-client rmr /atsv2-hbase-unsecure

and finally, restarting *all* YARN services from Ambari did the trick.
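The steps above can be sketched as a single script. Note that the HDFS path and the service name are my assumptions based on common HDP 3.x defaults (the original link is missing from the post), so verify them against your cluster before running anything:

```shell
# Consolidated cleanup for a wedged ats-hbase (HDP 3.x).
# ASSUMPTIONS: service name "ats-hbase" and HDFS path /atsv2/hbase are the
# HDP defaults; the znode below is for an unsecured cluster.
cleanup_ats_hbase() {
  # 1) Destroy the embedded ats-hbase YARN service (run as the yarn user)
  yarn app -destroy ats-hbase

  # 2) Remove its leftover HDFS state (default location; adjust if customized)
  hdfs dfs -rm -R -skipTrash /atsv2/hbase

  # 3) Clear the ZooKeeper znode from the step above
  zookeeper-client rmr /atsv2-hbase-unsecure

  # 4) ats-hbase is recreated on the next YARN start
  echo "Now restart ALL YARN services from Ambari"
}

# cleanup_ats_hbase   # uncomment to run on the cluster
```

Run it as the yarn user on a node with the Hadoop clients installed; on a Kerberized cluster the znode is typically /atsv2-hbase-secure instead.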


@Guillaume Roger Thanks for providing a solution. Is this safe to do on a running cluster? Will it cause any loss of data?

Expert Contributor

You will lose some job history, but nothing else and certainly no data, so it should not be an issue.


This did not make any difference. I still get this critical alert:

The HBase application reported a 'STARTED' state.