
UNLOCK TABLE if lock exists in HQL file

Expert Contributor

I have INSERT OVERWRITE queries in an HQL file which sometimes do not acquire the required locks, because an end user may be querying the same table at the same time.

The scheduled query simply fails in such cases, breaking the workflow. Is there a way to fix this?

If I just run UNLOCK TABLE every time, it fails with an error saying the lock does not exist. Can I unlock the table only if the lock exists, so that my scheduled query always succeeds? Here is the error from the log:
Heart beat
Heart beat
Heart beat
Heart beat
6712698 [main] ERROR org.apache.hadoop.hive.ql.Driver  - FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time
org.apache.hadoop.hive.ql.lockmgr.LockException: Locks on the underlying objects cannot be acquired. retry after some time
	at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.acquireLocks(DummyTxnManager.java:164)
	at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:1025)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1301)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:218)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:170)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:381)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:316)
	at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:414)
	at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:430)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:724)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:691)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
	at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:325)
	at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:302)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:49)
	at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:69)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

1 REPLY

Super Collaborator

Well, the lock is stored in ZooKeeper. So you can search ZooKeeper for the lock and delete it if it exists.
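For illustration, something like the following. This is only a sketch: the znode namespace is set by hive.zookeeper.namespace (default hive_zookeeper_namespace), and zkhost, mydb, mytable, and the LOCK-... node name are placeholders for whatever SHOW LOCKS and zkCli actually report on your cluster.

```
# In Hive, first confirm which locks are held on the table:
#   hive> SHOW LOCKS mytable;

# Then connect with the ZooKeeper CLI:
zkCli.sh -server zkhost:2181

# List the lock znodes for the table (namespace from hive-site.xml)
ls /hive_zookeeper_namespace/mydb/mytable

# Delete a stale lock znode
# (rmr on older ZooKeeper versions, deleteall on newer ones)
rmr /hive_zookeeper_namespace/mydb/mytable/LOCK-SHARED-0000000001
```

Again: deleting lock znodes bypasses Hive's lock manager entirely, so only do it for locks you are certain are stale.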

But I would advise you not to do so. Locks exist for data integrity; if you remove them when you should not, it can lead to some "odd results".

 

Maybe you could add some error handling to the workflow so that it retries X times before failing the whole workflow?
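For instance, if the scheduled script is launched from a shell action, a small wrapper can retry it on a lock conflict. This is only a sketch: the retry function, the counts, and query.hql are placeholder names of my own, not anything Hive or Oozie provides (Oozie actions also have their own retry-max / retry-interval settings, which may be the cleaner option for an Oozie workflow).

```shell
#!/bin/sh
# Sketch: re-run a command up to a maximum number of attempts, pausing
# between attempts, so a transient lock conflict does not fail the
# whole workflow. Placeholder names: retry, query.hql.
retry() {
    max_tries=$1
    delay=$2
    shift 2
    n=1
    until "$@"; do
        if [ "$n" -ge "$max_tries" ]; then
            echo "Command failed after $n attempts" >&2
            return 1
        fi
        n=$((n + 1))
        sleep "$delay"
    done
}

# Example: retry the scheduled Hive script up to 5 times, 60s apart
# retry 5 60 hive -f query.hql
```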

You could also communicate better with the users in order to reduce the likelihood of your scheduled queries running concurrently with users' queries.