Hive add partition MetaException

Contributor

Hi everyone!

I'm facing the following issue after upgrading from HDP 2.2 to 2.4 and then to 2.5.0.0, on a cluster running in secure mode:

When I try to add a partition to a table, Hive fails with this error:

hive> !whoami;
tech_gainfo_bgd_ms
hive> !klist;
Ticket cache: FILE:/tmp/krb5cc_927597973
Default principal: tech_gainfo_bgd_ms@COMPANY.RU


Valid starting     Expires            Service principal
10/27/16 13:47:56  10/27/16 23:47:56  krbtgt/COMPANY.RU@COMPANY.RU
	renew until 11/03/16 05:47:56
hive> alter table uss.test_tab add if not exists partition (b='i');
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
))
hive> !date
    > ;
Thu Oct 27 16:20:33 MSK 2016

For some reason, at some point Hive decides that I'm tech_biisfct_bgd_ms, even though the output of whoami and klist shows that I am tech_gainfo_bgd_ms (which is correct).

It happens only when I add a partition through the Hive CLI. Creating or dropping a table, or even dropping a partition, works normally in the Hive CLI. Create/drop table/partition operations also work normally when invoked through HiveServer2 (for example, from the Beeline CLI).

At the moment of the error, hivemetastore.log shows these lines:

2016-10-27 16:32:20,289 INFO  [pool-7-thread-166]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(822)) - 113: get_table : db=uss tbl=test_tab
2016-10-27 16:32:20,289 INFO  [pool-7-thread-166]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(391)) - ugi=tech_gainfo_bgd_ms@COMPANY.RU	ip=192.168.147.125	cmd=get_table : db=uss tbl=test_tab	
2016-10-27 16:32:20,330 INFO  [pool-7-thread-166]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(822)) - 113: add_partitions
2016-10-27 16:32:20,330 INFO  [pool-7-thread-166]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(391)) - ugi=tech_gainfo_bgd_ms@COMPANY.RU	ip=192.168.147.125	cmd=add_partitions	
2016-10-27 16:32:20,373 WARN  [HMSHandler #7]: retry.RetryInvocationHandler (RetryInvocationHandler.java:handleException(217)) - Exception while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over hd-name006.vimpelcom.ru/10.31.160.249:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
	at org.apache.hadoop.ipc.Client.call(Client.java:1496)
	at org.apache.hadoop.ipc.Client.call(Client.java:1396)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816)
	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
	at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419)
	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createLocationForAddedPartition(HiveMetaStore.java:2529)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:310)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2277)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2274)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-10-27 16:32:20,373 ERROR [HMSHandler #7]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1239)) - Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


org.apache.hadoop.security.AccessControlException: Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2160)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419)
	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createLocationForAddedPartition(HiveMetaStore.java:2529)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:310)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2277)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2274)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
	at org.apache.hadoop.ipc.Client.call(Client.java:1496)
	at org.apache.hadoop.ipc.Client.call(Client.java:1396)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816)
	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
	at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158)
	... 13 more
2016-10-27 16:32:20,374 ERROR [HMSHandler #7]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1240)) - Converting exception to MetaException
2016-10-27 16:32:20,376 ERROR [pool-7-thread-166]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(195)) - MetaException(message:MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
))
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_core(HiveMetaStore.java:2301)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_req(HiveMetaStore.java:2338)
	at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:139)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:97)
	at com.sun.proxy.$Proxy15.add_partitions_req(Unknown Source)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9771)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9755)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:551)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:546)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:546)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

Any ideas?

I will be grateful for any help.

1 ACCEPTED SOLUTION

Contributor

After almost a week, we decided to downgrade the Hive version from HDP-2.5.0.0-tag to HDP-2.4.4.0-tag.

As far as I can tell, this is because of this commit: https://github.com/hortonworks/hive-release/commit/059f15fe55977246da1e432cf18202b48789775d#diff-e5e...

This commit was intended to speed up the add partition operation, but on a heavily loaded cluster with many concurrent add partition operations it leads to thread-safety side effects.

Unfortunately, I can't pinpoint what exactly is wrong in this code; perhaps it just needs a special set of configuration parameters.
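
To illustrate the kind of problem I suspect, here is a minimal, self-contained Java sketch (it is not the actual metastore code, just my assumption about the mechanism). Hadoop derives the effective user from the JAAS Subject that UserGroupInformation.doAs binds to the calling thread's access-control context, and that binding does not automatically follow a task submitted to a shared thread pool. If the new add_partitions path checks or creates partition directories on pooled worker threads, the NameNode can see a different user than the one recorded in the metastore audit log, which would match what I observe: tech_gainfo_bgd_ms in the audit line but tech_biisfct_bgd_ms in the AccessControlException.

import javax.security.auth.Subject;
import java.security.AccessController;
import java.security.Principal;
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DoAsPoolDemo {

    // Trivial Principal used only for this demo (stands in for a Kerberos principal).
    static final class User implements Principal {
        private final String name;
        User(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Name of the Subject bound to the current access-control context; this is
    // essentially what UserGroupInformation.getCurrentUser() relies on.
    static String currentUser() {
        Subject s = Subject.getSubject(AccessController.getContext());
        return (s == null || s.getPrincipals().isEmpty())
                ? "<no subject>"
                : s.getPrincipals().iterator().next().getName();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // Warm up the pool so the worker thread is created outside any doAs block,
        // like a long-lived handler pool in a server process.
        pool.submit(() -> { }).get();

        Subject caller = new Subject(true,
                Collections.singleton(new User("tech_gainfo_bgd_ms")),
                Collections.emptySet(), Collections.emptySet());

        Subject.doAs(caller, (PrivilegedExceptionAction<Void>) () -> {
            // The thread that received the request sees the right user ...
            System.out.println("handler thread sees: " + currentUser());

            // ... but a task handed to the shared pool does not inherit the Subject,
            // so permission-sensitive work done there runs as "somebody else".
            Future<String> onPool =
                    pool.submit((Callable<String>) DoAsPoolDemo::currentUser);
            System.out.println("pool thread sees:    " + onPool.get());
            return null;
        });

        pool.shutdown();
    }
}

On the Java 7/8 we run, this prints the principal name on the handler thread and "<no subject>" on the pool thread. In a real metastore under load, the pooled thread could instead pick up whatever UGI or cached FileSystem instance it used last, i.e. a previous caller, which is exactly the kind of wrong-user behaviour shown above.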

Anyway, to downgrade the Hive version we simply repointed all the hive* symbolic links in /usr/hdp to the HDP-2.4.4.0-tag Hive directories, replaced the Hive jars in /user/oozie/share, and restarted all Hive and Oozie services.

After that, all services work normally.


5 REPLIES

Expert Contributor

@Roman Boyko

This seems to be a Hive Metastore issue, and it looks like one of the users who accessed the metastore earlier is holding a lock on the resource. Please work with Hortonworks Support/Engineering to get this issue resolved.

Contributor

@Hari Rongali Thank you for the response, but I don't think this is caused by a lock on the resource. Judging by the stack trace, it looks more like the wrong user is picked up in the createLocationForAddedPartition method.


Can you run a CREATE TABLE in the same session and do an ls on the HDFS directory to see who owns the table directory? One possible explanation for the user mismatch could be a custom user mapping defined in the hadoop.security.auth_to_local property in /etc/hadoop/conf/core-site.xml. Can you check that?
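
For example, something along these lines from the same Hive CLI session (check_owner_tmp is just a hypothetical scratch table name, and HadoopKerberosName is the stock Hadoop helper class that prints how a principal maps to a short name, if it is present in your Hadoop version):

hive> create table uss.check_owner_tmp (a int);
hive> dfs -ls /apps/hive/warehouse/uss.db/check_owner_tmp;
hive> dfs -ls /apps/hive/warehouse/uss.db/test_tab;
hive> !hdfs getconf -confKey hadoop.security.auth_to_local;
hive> !hadoop org.apache.hadoop.security.HadoopKerberosName tech_gainfo_bgd_ms@COMPANY.RU;

If the directories are owned by tech_gainfo_bgd_ms but the mapping prints a different short name, the auth_to_local rules would be the place to look.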

Contributor

@Deepesh Unfortunately, this seems to be a different problem. The table is created with the correct owner and looks normal.

The hadoop.security.auth_to_local property defines mappings only for the tech users responsible for the Hadoop services.

Anyway, thank you very much for the reply!
