Hive lock left behind

Hi,

We run Cloudera Express 5.7.0 on a cluster of 29 machines (2 name nodes, 22 data nodes, and 5 dedicated ZooKeeper servers). All machines run CentOS 6.7.

We have an external table in Hive, which we use in 3 different ways:

  1. ingest new data at a regular pace throughout the day (a sketch of what this typically boils down to follows right after this list);
  2. run queries / workflows for data mining;
  3. perform cleanup -- a procedure that runs once per day and deletes data older than x days.
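
For item 1, ingestion essentially makes newly landed data visible as partitions of the external table; for a partitioned external table that is typically something along these lines (the partition key dt and the HDFS path are made up for illustration, our real keys and locations differ):

-- sketch of the ingestion step; partition key and location are placeholders
ALTER TABLE table ADD IF NOT EXISTS
  PARTITION (dt='2016-10-07')
  LOCATION '/data/table/dt=2016-10-07';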

Every now and then, we notice the cleanup failing because it cannot acquire the lock on the table. Specifically, the cleanup is invoked from bash as

hive -e "ALTER TABLE table DROP PARTITION (...)"

and what we would get back is

Unable to acquire IMPLICIT, EXCLUSIVE lock default@table@...(partitions info) after 100 attempts. FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time

In order to work around the issue, we restart the whole Hive service, and the problem goes away. Annoying!
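
For reference, the cleanup is just a plain bash wrapper around the hive -e call above; the sketch below uses a made-up partition key (dt) and retention, while the real partition spec is the "(...)" shown earlier. As far as I can tell, the "100 attempts" in the error matches the default of hive.lock.numretries (100), and hive.lock.sleep.between.retries (default 60 seconds) controls the wait between attempts.

# sketch of the daily cleanup wrapper; partition key and retention are placeholders
CUTOFF=$(date -d "-30 days" +%Y-%m-%d)
hive -e "ALTER TABLE table DROP PARTITION (dt < '${CUTOFF}')"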

I tried to debug the issue a bit, but I am having a hard time figuring out how to continue. Since it happened again today, I thought I would share my findings with you all and seek some help 🙂 I apologize in advance for the very long text.

- If I run SHOW LOCKS default.table PARTITION (...) in Hive, I see there is an active lock:

default@table@(partitions info) SHARED
LOCK_QUERYID:hive_20161007081717_9543c9b3-dde1-4b8d-942c-979a091b3eae
LOCK_TIME:1475828283643 --- Fri, 07 Oct 2016 08:18:03.643 GMT
LOCK_MODE:IMPLICIT
LOCK_QUERYSTRING: ...
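
(If I remember correctly, the LOCK_QUERYID / LOCK_TIME / LOCK_MODE / LOCK_QUERYSTRING lines are the ones that show up with the extended form of the command, i.e. something like the following, keeping the same placeholder partition spec:)

SHOW LOCKS default.table PARTITION (...) EXTENDED;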

By the look of things, LOCK_QUERYSTRING is a query submitted via Hue by one of the cluster users.

- In HiveHistoryServer, I can find a job, submitted at Fri Oct 07 08:19:52 UTC 2016, that runs the LOCK_QUERYSTRING query. Note that it is a bit odd that the submit time is slightly later than LOCK_TIME, even though the two are very close -- is it really this job? I am not sure. Assuming it is, the job is flagged as State: SUCCEEDED.
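
(The GMT timestamp I noted next to LOCK_TIME above comes from converting the epoch value with the milliseconds dropped, e.g.:)

date -u -d @1475828283
Fri Oct  7 08:18:03 UTC 2016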

- In ZooKeeper, via zookeeper-client, I can see the lock:

[zk: localhost:2181(CONNECTED) 5] get /hive_zookeeper_namespace_hive/default/table/(partitions info)/LOCK-SHARED-0000000000
hive_20161007081717_9543c9b3-dde1-4b8d-942c-979a091b3eae:1475828283643:
IMPLICIT:
(LOCK_QUERYSTRING)
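
(From the same zookeeper-client session, the lock node can also be listed and inspected, e.g. to check whether it is the only child left under that path and when it was created; this is a sketch reusing the placeholder path from above:)

[zk: localhost:2181(CONNECTED) 6] ls /hive_zookeeper_namespace_hive/default/table/(partitions info)
[zk: localhost:2181(CONNECTED) 7] stat /hive_zookeeper_namespace_hive/default/table/(partitions info)/LOCK-SHARED-0000000000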

- I can find the machine in the ZooKeeper ensemble (let us call it ZK1) that hosts the connection holding the lock, by running echo cons | ncat 127.0.0.1 2181:

/10.0.0.6:38266[1](queued=0,recved=412004,sent=412078,sid=0x456d6bab545a8ef,lop=PING,est=1474622333769,to=40000,lcxid=0x4cfac,lzxid=0xffffffffffffffff,lresp=1475916179148,llat=0,minlat=0,avglat=0,maxlat=19)

- The machine and port number, i.e. 10.0.0.6:38266, point to HiveServer2. No queries are running.

- If I take a stack trace of HiveServer2, I see, among other things:

Thread 59 (HiveServer2-Handler-Pool: Thread-47-EventThread):
State: WAITING
Blocked count: 3
Waited count: 76
Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@8e6f2b0
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Thread 58 (HiveServer2-Handler-Pool: Thread-47-SendThread(ZK1)):
State: RUNNABLE
Blocked count: 1925
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

I.e., it looks like threads in HiveServer2 are still busy with something related to the ZooKeeper machine involved in the issue.
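
(For reference, the same threads show up in a plain jstack dump of the HiveServer2 process, so the trace can be reproduced with something like the following; the pid lookup is approximate and assumes the standard HiveServer2 main class and that the service runs as user hive:)

# sketch: thread dump of HiveServer2, then look at the ZooKeeper client threads
sudo -u hive jstack $(pgrep -f org.apache.hive.service.server.HiveServer2) > /tmp/hs2_threads.txt
grep -B2 -A8 'SendThread' /tmp/hs2_threads.txt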

- After restarting Hive, the stack trace no longer shows anything like this, and the cleanup succeeds.

I Googled a bit for epollWait and found bugs like http://bugs.java.com/view_bug.do?bug_id=6403933, i.e. a bug in the Java libraries could be the reason for the observed behavior. On the other hand, that bug seems to manifest as high CPU utilization, and I have no evidence of that (I did not pay attention to it at the time).
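
(If it happens again, I guess checking per-thread CPU of the HiveServer2 process would tell whether that bug is in play -- something along these lines, with the same approximate pid lookup as in the jstack sketch above:)

top -H -p $(pgrep -f org.apache.hive.service.server.HiveServer2)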

Given all this, do you have any hints on how to debug the issue further?

Thanks and regards,

Filippo
