<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive add partition MetaException in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152827#M44678</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/267/hrongali.html" nodeid="267"&gt;@Hari Rongali&lt;/A&gt; Thank you for the response, but I don't think this is caused by a lock on the resource. It looks more like the wrong user identity being used in the createLocationForAddedPartition method, as seen in the stack trace.&lt;/P&gt;</description>
    <pubDate>Fri, 28 Oct 2016 17:02:35 GMT</pubDate>
    <dc:creator>ro_v_boyko</dc:creator>
    <dc:date>2016-10-28T17:02:35Z</dc:date>
    <item>
      <title>Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152825#M44676</link>
      <description>&lt;P&gt;Hi everyone!&lt;/P&gt;&lt;P&gt;I'm facing the following issue after upgrading from HDP 2.2 to 2.4 and then to 2.5.0.0 on a cluster in secure mode:&lt;/P&gt;&lt;P&gt;When I try to add a partition to a table, Hive fails with this error:&lt;/P&gt;&lt;PRE&gt;hive&amp;gt; !whoami;
tech_gainfo_bgd_ms
hive&amp;gt; !klist;
Ticket cache: FILE:/tmp/krb5cc_927597973
Default principal: tech_gainfo_bgd_ms@COMPANY.RU


Valid starting     Expires            Service principal
10/27/16 13:47:56  10/27/16 23:47:56  krbtgt/COMPANY.RU@COMPANY.RU
	renew until 11/03/16 05:47:56
hive&amp;gt; alter table uss.test_tab add if not exists partition (b='i');
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
))
hive&amp;gt; !date
    &amp;gt; ;
Thu Oct 27 16:20:33 MSK 2016
&lt;/PRE&gt;&lt;P&gt;For some reason, at one point Hive decides that I am tech_biisfct_bgd_ms, although the output of whoami and klist shows that I am tech_gainfo_bgd_ms (which is correct).&lt;/P&gt;&lt;P&gt;This happens only when I add a partition through the Hive CLI. Creating or dropping a table, or even dropping a partition, from the Hive CLI works normally. Create/drop table and partition operations also work normally when invoked through HiveServer2 (for example, from the Beeline CLI).&lt;/P&gt;&lt;P&gt;At the moment of the error, hivemetastore.log shows these lines:&lt;/P&gt;&lt;PRE&gt;2016-10-27 16:32:20,289 INFO  [pool-7-thread-166]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(822)) - 113: get_table : db=uss tbl=test_tab
2016-10-27 16:32:20,289 INFO  [pool-7-thread-166]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(391)) - ugi=tech_gainfo_bgd_ms@COMPANY.RU	ip=192.168.147.125	cmd=get_table : db=uss tbl=test_tab	
2016-10-27 16:32:20,330 INFO  [pool-7-thread-166]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(822)) - 113: add_partitions
2016-10-27 16:32:20,330 INFO  [pool-7-thread-166]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(391)) - ugi=tech_gainfo_bgd_ms@COMPANY.RU	ip=192.168.147.125	cmd=add_partitions	
2016-10-27 16:32:20,373 WARN  [HMSHandler #7]: retry.RetryInvocationHandler (RetryInvocationHandler.java:handleException(217)) - Exception while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over hd-name006.vimpelcom.ru/10.31.160.249:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
	at org.apache.hadoop.ipc.Client.call(Client.java:1496)
	at org.apache.hadoop.ipc.Client.call(Client.java:1396)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816)
	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
	at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419)
	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createLocationForAddedPartition(HiveMetaStore.java:2529)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:310)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2277)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2274)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-10-27 16:32:20,373 ERROR [HMSHandler #7]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1239)) - Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


org.apache.hadoop.security.AccessControlException: Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2160)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419)
	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createLocationForAddedPartition(HiveMetaStore.java:2529)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:310)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2277)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$10.call(HiveMetaStore.java:2274)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)


	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
	at org.apache.hadoop.ipc.Client.call(Client.java:1496)
	at org.apache.hadoop.ipc.Client.call(Client.java:1396)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816)
	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
	at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158)
	... 13 more
2016-10-27 16:32:20,374 ERROR [HMSHandler #7]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1240)) - Converting exception to MetaException
2016-10-27 16:32:20,376 ERROR [pool-7-thread-166]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(195)) - MetaException(message:MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=tech_biisfct_bgd_ms, access=EXECUTE, inode="/apps/hive/warehouse/uss.db/test_tab/b=i":tech_gainfo_bgd_ms:bgd_gainfo_prod_ms:drwxr-x---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3972)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1130)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
))
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_core(HiveMetaStore.java:2301)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_req(HiveMetaStore.java:2338)
	at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:139)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:97)
	at com.sun.proxy.$Proxy15.add_partitions_req(Unknown Source)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9771)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9755)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:551)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:546)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:546)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/PRE&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;I'd be grateful for any help.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Oct 2016 21:01:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152825#M44676</guid>
      <dc:creator>ro_v_boyko</dc:creator>
      <dc:date>2016-10-27T21:01:27Z</dc:date>
    </item>
    <item>
      <title>Re: Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152826#M44677</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2582/rovboyko.html" nodeid="2582"&gt;@Roman Boyko&lt;/A&gt;&lt;/P&gt;&lt;P&gt;This seems to be a Hive metastore issue; it looks like a user who accessed the metastore earlier is holding a lock on the resource. Please work with Hortonworks Support/Engineering to get this issue resolved.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Oct 2016 21:27:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152826#M44677</guid>
      <dc:creator>hrongali</dc:creator>
      <dc:date>2016-10-27T21:27:35Z</dc:date>
    </item>
    <item>
      <title>Re: Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152827#M44678</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/267/hrongali.html" nodeid="267"&gt;@Hari Rongali&lt;/A&gt; Thank you for the response, but I don't think this is caused by a lock on the resource. It looks more like the wrong user identity being used in the createLocationForAddedPartition method, as seen in the stack trace.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Oct 2016 17:02:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152827#M44678</guid>
      <dc:creator>ro_v_boyko</dc:creator>
      <dc:date>2016-10-28T17:02:35Z</dc:date>
    </item>
    <item>
      <title>Re: Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152828#M44679</link>
      <description>&lt;P&gt;Can you run a CREATE TABLE in the same session and do an ls on the HDFS directory to see who owns the table directory? One possible explanation for the user mismatch is a custom user mapping defined in the hadoop.security.auth_to_local property in /etc/hadoop/conf/core-site.xml. Can you check that?&lt;/P&gt;</description>
      <pubDate>Sat, 29 Oct 2016 02:44:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152828#M44679</guid>
      <dc:creator>deepesh1</dc:creator>
      <dc:date>2016-10-29T02:44:46Z</dc:date>
    </item>
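The auth_to_local check suggested above can be illustrated with a minimal, simplified sketch. This is not Hadoop's actual rule parser; it applies one sed-style rule (the pattern and replacement below are hypothetical) to show how a mapping rule turns a Kerberos principal into a short user name, which is exactly where a wrong mapping would produce a mismatched user like the one in the error:

```python
import re

def apply_rule(principal, pattern, replacement):
    # Apply one simplified sed-style auth_to_local rule: if the
    # principal matches the pattern, rewrite it into a short name.
    if re.match(pattern, principal):
        return re.sub(pattern, replacement, principal)
    return principal  # no rule matched; the principal passes through

# Principal taken from the thread; the rule (hypothetical) strips the realm.
print(apply_rule("tech_gainfo_bgd_ms@COMPANY.RU", r"(.*)@COMPANY\.RU$", r"\1"))
```

A rule that matched too broadly, or rules evaluated in the wrong order, could map two different principals to the same short name, so checking this property was a reasonable thing to rule out.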
    <item>
      <title>Re: Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152829#M44680</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/222/deepesh.html" nodeid="222"&gt;@Deepesh&lt;/A&gt; Unfortunately, this seems to be a different problem. The table is created with the right owner and looks normal.&lt;/P&gt;&lt;P&gt;The hadoop.security.auth_to_local property defines only the tech users responsible for the Hadoop services.&lt;/P&gt;&lt;P&gt;Anyway, thank you very much for the reply!&lt;/P&gt;</description>
      <pubDate>Mon, 31 Oct 2016 22:38:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152829#M44680</guid>
      <dc:creator>ro_v_boyko</dc:creator>
      <dc:date>2016-10-31T22:38:08Z</dc:date>
    </item>
    <item>
      <title>Re: Hive add partition MetaException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152830#M44681</link>
      <description>&lt;P&gt;After almost a week, we decided to downgrade Hive from HDP-2.5.0.0-tag to HDP-2.4.4.0-tag.&lt;/P&gt;&lt;P&gt;As far as I can tell, this is caused by this commit: &lt;A href="https://github.com/hortonworks/hive-release/commit/059f15fe55977246da1e432cf18202b48789775d#diff-e5e61d7b4ede0525d16c819ae6d8f5c5" target="_blank"&gt;https://github.com/hortonworks/hive-release/commit/059f15fe55977246da1e432cf18202b48789775d#diff-e5e61d7b4ede0525d16c819ae6d8f5c5&lt;/A&gt;&lt;/P&gt;&lt;P&gt;This commit was intended to speed up the add partition operation, but it leads to thread-safety side effects on a heavily loaded cluster with many concurrent add partition operations.&lt;/P&gt;&lt;P&gt;Unfortunately, I can't find what exactly is incorrect in this code; perhaps it requires a special set of configuration parameters.&lt;/P&gt;&lt;P&gt;In any case, to downgrade Hive we simply repointed all the hive* symlinks in /usr/hdp to the HDP-2.4.4.0-tag directories, replaced the Hive jars in /user/oozie/share, and restarted all Hive and Oozie services.&lt;/P&gt;&lt;P&gt;After that, all services worked normally.&lt;/P&gt;</description>
      <pubDate>Thu, 03 Nov 2016 13:45:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-add-partition-MetaException/m-p/152830#M44681</guid>
      <dc:creator>ro_v_boyko</dc:creator>
      <dc:date>2016-11-03T13:45:13Z</dc:date>
    </item>
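The downgrade described above boils down to repointing symlinks at the older release's directories and restarting services. A self-contained sketch of that symlink swap under a throwaway directory (the paths here are demonstration stand-ins, not the real /usr/hdp or /user/oozie/share locations):

```shell
# Demonstration only: the real links live under /usr/hdp and the jars
# under /user/oozie/share; these temp paths are stand-ins.
demo=$(mktemp -d)
mkdir -p "$demo/2.5.0.0/hive" "$demo/2.4.4.0/hive"

# Initially the "current" link points at the 2.5.0.0 build...
ln -sfn "$demo/2.5.0.0/hive" "$demo/current-hive"

# ...and the downgrade repoints it at 2.4.4.0 (services restarted after).
ln -sfn "$demo/2.4.4.0/hive" "$demo/current-hive"
readlink "$demo/current-hive"
```

The -n flag matters: without it, ln would follow the existing link and create the new symlink inside the old target directory instead of replacing the link itself.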
  </channel>
</rss>

