
ATSv2 HBase Application and TimeLineService v2.0: /atsv2-hbase-secure/meta-region-server not created

Explorer

Hello all,

 

We have had issues with yarn-ats several times, but most of them were solved simply by destroying the app and restarting YARN through Ambari.

 

However, this last time we couldn't recover the yarn-ats app using that method. TimelineService v2.0 had problems connecting to a particular node:

 

2021-01-27 13:21:36,901 INFO  client.RpcRetryingCallerImpl (RpcRetryingCallerImpl.java:callWithRetries(134)) - Call exception, tries=7, retries=7, started=8236 ms ago, cancelled=false, msg=Call to node.es/XX.XX.XX.XX:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: node.es/XX.XX.XX.XX:17020, details=row 'prod.timelineservice.entity,hive!yarn-cluster!HIVE-b41f83ed-5b82-4098-b912-c36feca9049e!����\П!���������!YARN_CONTAINER!����\��!container_e87_1611702208565_0022_01_000001,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=node.es,17020,1611702280697, seqNum=-1

 

We destroyed the app, recreated it, and so on, but hit the same issue time after time. We checked the documentation and this community forum and proceeded as indicated here:

 

* Remove yarn-ats and clean ZooKeeper (a sketch of the commands we ran is included after the links):

https://community.cloudera.com/t5/Support-Questions/ATS-hbase-does-not-seem-to-start/m-p/235162

https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/remove_ats_hbase...
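
Roughly, the steps we followed from those links looked like this (a sketch only: the keytab path and principal are placeholders for whatever the yarn-ats user uses on your cluster, and the exact procedure is the one described in the linked docs):

# destroy the ats-hbase YARN service as the yarn-ats user
su - yarn-ats
kinit -kt /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab <yarn-ats principal>
yarn app -destroy ats-hbase

# then remove the old parent znode with the ZooKeeper CLI
zookeeper-client -server XXXX.es:2181
rmr /atsv2-hbase-secure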

 

However, the problem persists, although the errors are now different. It is frustrating because yarn-ats does not create /atsv2-hbase-secure/meta-region-server, while the other children of the ZooKeeper znode are created.

 

2021-02-01 12:57:11,446 INFO storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(170)) - Running HBase liveness monitor
2021-02-01 12:57:11,448 WARN storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(183)) - Got failure attempting to read from timeline storage, assuming HBase down
java.io.UncheckedIOException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
at org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:283)
at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl$HBaseMonitor.run(HBaseTimelineReaderImpl.java:174)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:53)
... 9 more
Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server
at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2002)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:762)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:729)
at org.apache.hadoop.hbase.client.ConnectionImplementation.relocateRegion(ConnectionImplementation.java:707)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:911)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:732)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
... 17 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:164)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:321)
... 1 more

 

* Furthermore, changing zookeeper.znode.parent as suggested in another thread does not help either (the config change is sketched after the link below).

 

https://community.cloudera.com/t5/Support-Questions/HDP3-0-timeline-service-V2-reader-cannot-create-...
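
For reference, the change was roughly the following (the exact Ambari config section name may differ slightly between stack versions; on our cluster it was under the YARN configs):

# Ambari -> YARN -> Configs -> Advanced yarn-hbase-site
zookeeper.znode.parent = /atsv2-hbase-secure-new    (previously /atsv2-hbase-secure)

followed by a restart of YARN / TimelineService v2.0 so that ats-hbase registers under the new parent znode.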

 

The new parent znode is created, but not the meta-region-server child:

 

[zk: XXXX.es:2181,XXXX.es:2181,XXXX.es:2181(CONNECTED) 0] ls /atsv2-hbase-secure
[replication, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, online-snapshot, switch, running, tokenauth, draining, hbaseid, table]
[zk: XXXX:2181,XXXX.es:2181,XXXX.es:2181(CONNECTED) 1] ls /atsv2-hbase-secure-new
[replication, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, online-snapshot, master, switch, running, tokenauth, draining, hbaseid, table]

 

I have seen many other threads about ATSv2 problems, but this new issue seems impossible for us to solve. Why is meta-region-server not created under zookeeper.znode.parent? Does anyone have an idea?

 

We are running HDP 3.1.4 and Ambari 2.7.4.0.

 

Thank you very much in advance.

 

Cheers,

 

Carles 


Super Collaborator

Hello @Aco 

 

Thanks for using the Cloudera Community. Your post shows the meta-region-server znode is missing.

 

I would request you to review the ATS HBase Master and ATS Region Server logs. The link [1] is a good place to start. In a few instances I have seen unchecking "is_hbase_system_service_launch", or permission issues, cause this kind of problem. The ATS HBase logs are the right place to review here.
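
For example, something along these lines (paths are the usual HDP defaults and the application ID is a placeholder; adjust to your cluster):

# system-service mode: ats-hbase runs as a YARN service, so its logs are container logs
yarn logs -applicationId <ats-hbase application id> | grep -iE "error|exception"

# embedded mode: the master/regionserver logs are written locally on the host
ls /var/log/hadoop-yarn/embedded-yarn-ats-hbase/
grep -iE "error|exception" /var/log/hadoop-yarn/embedded-yarn-ats-hbase/*master*.log* | tail -50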

 

- Smarak

 

[1] https://my.cloudera.com/knowledge/Troubleshooting-YARN-ATS-v2-Issues-ATS-HBase-Modes?id=310144

Explorer

Hello @smdas

 

Thank you for your response.

 

I can confirm that the "is_hbase_system_service_launch" option is checked. Regarding permission issues, I am going to check, but nothing has changed in that respect in our configuration. I'll continue to investigate.

 

Thank you again.


Cheers,

 

Carles

Super Collaborator

Hello @Aco 

 

Thanks for the reply. Would it be feasible for your team to uncheck "is_hbase_system_service_launch", restart the YARN Timeline Service, and confirm the status?

 

- Smarak

Explorer

Hello @smdas 

 

Thank you again for your quick response.

 

Doing as you indicated, TimelineService v2.0 stays in "Starting..." mode for a very long time. The errors are:

 

2021-02-03 10:59:30,683 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=15, retries=36, started=128767 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 10:59:50,837 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=16, retries=36, started=148921 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 11:00:10,843 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=17, retries=36, started=168927 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 11:00:30,915 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=18, retries=36, started=188999 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 11:00:51,112 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=19, retries=36, started=209196 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 11:01:11,147 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=20, retries=36, started=229231 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
2021-02-03 11:01:31,214 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=21, retries=36, started=249298 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null

On the logs side, in the new /var/log/hadoop-yarn/embedded-yarn-ats-hbase directory, there are these messages:

 

2021-02-03 11:08:04,944 INFO [master/ambarisrv02:17000.splitLogManager..Chore.1] master.SplitLogManager: total=1, unassigned=1, tasks={/atsv2-hbase-secure/splitWAL/WALs%2Fhnode34.pic.es%2C17020%2C1606168410836-splitting%2Fhnode34.pic.es%252C17020%252C1606168410836.1611699600885=last_update = 1612346240946 last_version = 33 cur_worker_name = null status = in_progress incarnation = 2 resubmits = 1 batch = installed = 1 done = 0 error = 0}
2021-02-03 11:08:04,947 INFO [main-EventThread] coordination.SplitLogManagerCoordination: Task /atsv2-hbase-secure/splitWAL/RESCAN0000007157 entered state=DONE ambarisrv02.pic.es,17000,1612346230681
2021-02-03 11:08:05,092 INFO [PEWorker-2] client.RpcRetryingCallerImpl: Call exception, tries=16, retries=22, started=148876 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.app_flow' on table 'hbase:meta' at null
2021-02-03 11:08:05,138 WARN [master/ambarisrv02:17000] assignment.AssignmentManager: No servers available; cannot place 1 unassigned regions.
2021-02-03 11:08:05,946 INFO [main-EventThread] coordination.SplitLogManagerCoordination: Task /atsv2-hbase-secure/splitWAL/RESCAN0000007158 entered state=DONE ambarisrv02.pic.es,17000,1612346230681
2021-02-03 11:08:06,138 WARN [master/ambarisrv02:17000] assignment.AssignmentManager: No servers available; cannot place 1 unassigned regions.
2021-02-03 11:08:06,946 INFO [main-EventThread] coordination.SplitLogManagerCoordination: Task /atsv2-hbase-secure/splitWAL/RESCAN0000007159 entered state=DONE ambarisrv02.pic.es,17000,1612346230681
2021-02-03 11:08:07,139 WARN [master/ambarisrv02:17000] assignment.AssignmentManager: No servers available; cannot place 1 unassigned regions.
2021-02-03 11:08:07,946 INFO [main-EventThread] coordination.SplitLogManagerCoordination: Task /atsv2-hbase-secure/splitWAL/RESCAN0000007160 entered state=DONE ambarisrv02.pic.es,17000,1612346230681
2021-02-03 11:08:08,139 WARN [master/ambarisrv02:17000] assignment.AssignmentManager: No servers available; cannot place 1 unassigned regions.
2021-02-03 11:08:08,243 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: Worker stuck PEWorker-1(pid=3087), run time 10mins, 50.255sec
2021-02-03 11:08:08,243 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: Worker stuck PEWorker-2(pid=3089), run time 10mins, 50.255sec
2021-02-03 11:08:08,946 INFO [main-EventThread] coordination.SplitLogManagerCoordination: Task /atsv2-hbase-secure/splitWAL/RESCAN0000007161 entered state=DONE ambarisrv02.pic.es,17000,1612346230681
2021-02-03 11:08:09,140 WARN [master/ambarisrv02:17000] assignment.AssignmentManager: No servers available; cannot place 1 unassigned regions.
 


And the same messages in the timelinereader logs:

 

2021-02-03 11:09:11,446 INFO storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(170)) - Running HBase liveness monitor
2021-02-03 11:09:11,448 WARN storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(183)) - Got failure attempting to read from timeline storage, assuming HBase down
java.io.UncheckedIOException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
at org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:283)
at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl$HBaseMonitor.run(HBaseTimelineReaderImpl.java:174)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:53)
... 9 more
Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server
at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2002)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:762)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:729)
at org.apache.hadoop.hbase.client.ConnectionImplementation.relocateRegion(ConnectionImplementation.java:707)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:911)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:732)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
... 17 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:164)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:321)
... 1 more

 

I also had to kill the hung timelinereader start process running on the server.

 

After restoring the old configuration (launching HBase as a system service), the timelinereader starts, but of course the problem persists in the logs.

 

Thus, in conclusion: after removing the /atsv2-hbase-secure configuration from ZooKeeper, the YARN timelinereader is not able to recreate the meta-region-server znode...

 

Thank you again.

 

Cheers,

 

Carles

Contributor

Hi,

 

If /atsv2-hbase-secure/meta-region-server is not getting created on its own, you can create it manually and set the appropriate permissions/ACLs on it as per your configuration.

Destroy ats-hbase and then recreate it manually.

Restart the Timeline Reader v2 and the ResourceManager.

ats-hbase in system-service mode can be problematic. Can you try running it in embedded mode, or use an external HBase?
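
Roughly, the Ambari flags involved are the ones below (names as found in the HDP 3 YARN configs; please verify against your stack version before changing anything):

# Ambari -> YARN -> Configs -> Advanced yarn-hbase-env
is_hbase_system_service_launch = false    # embedded mode (ats-hbase runs inside the Timeline Service daemons)
use_external_hbase             = true     # only if you instead point ATSv2 at an existing external HBase

After changing either flag, destroy ats-hbase, clean the znode, and restart the Timeline Service v2.0 readers and the ResourceManagers.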

 

Let me know if that works. I can share the steps if you want to create ats-hbase manually or use an external HBase.

Explorer

Hi @AmirMirza 

 

Thank you very much for your response. 

 

If I compare the /atsv2-hbase-secure info with another cluster, I see:

 

[zk: xxx.xxx.es:2181,yyy.yyy.es:2181,zzz.zzz.es:2181(CONNECTED) 10] ls /atsv2-hbase-secure 
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, online-snapshot, acl, switch, running, tokenauth, draining, namespace, hbaseid, table]


[zk: xxx.xxx.es:2181,yyy.yyy.es:2181,zzz.zzz.es:2181(CONNECTED) 11] getAcl /atsv2-hbase-secure
'sasl,'yarn
: cdrwa
'world,'anyone
: r
'sasl,'yarn-ats-hbase
: cdrwa

 

While on the failing server:

 

[zk: aaa.aaa.es:2181,bbb.bbb.es:2181,ccc.ccc.es:2181(CONNECTED) 0] ls /atsv2-hbase-secure
[replication, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, online-snapshot, switch, running, tokenauth, draining, hbaseid, table]


 [zk: aaa.aaa.es:2181,bbb.bbb.es:2181,ccc.ccc.es:2181(CONNECTED) 1] getAcl /atsv2-hbase-secure
'sasl,'yarn
: cdrwa
'world,'anyone
: r
'sasl,'yarn-ats-hbase
: cdrwa

There are several znodes ("files") missing:

 

meta-region-server

acl

namespace

 

Do you think we can recreate them manually? I'm not used to working with ZooKeeper and don't know how to proceed. In any case, why are they not created automatically?

 

Best regards,

 

Carles

Contributor

Hi @Aco 

 

Yes, you can create it manually. Check the ZooKeeper documentation on how to create those znodes. This usually happens due to insufficient permissions. You need to create the znodes from the ZooKeeper CLI and set the appropriate permissions; the rest of the data and znode creation will be taken care of by ZooKeeper.
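
Roughly (a sketch only; the ACLs just mirror what you pasted from the healthy cluster, so adjust them to your own setup, and authenticate first as a principal that already has cdrwa on /atsv2-hbase-secure, e.g. yarn or yarn-ats-hbase):

zookeeper-client -server aaa.aaa.es:2181
create /atsv2-hbase-secure/meta-region-server ""
setAcl /atsv2-hbase-secure/meta-region-server sasl:yarn:cdrwa,sasl:yarn-ats-hbase:cdrwa,world:anyone:r
getAcl /atsv2-hbase-secure/meta-region-server

In normal operation the ats-hbase master publishes this znode itself once hbase:meta is assigned to a region server, so also keep an eye on the HBase master log while it restarts.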

 

I will see if I can find the exact steps to share with you.