<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question unable to insert overwrite partitioned hive table in cdh6.2.1 (with kerberos and sentry enabled) in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/unable-to-insert-overwrite-partitioned-hive-table-in-cdh6-2/m-p/313526#M225574</link>
    <description>&lt;P&gt;&lt;STRONG&gt;Issue summary:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In CDH 6.2.1, with Kerberos and Sentry enabled, we get errors when using an "insert overwrite" statement to insert data into new partitions of a partitioned table. The SQL statements and the detailed metastore and HiveServer2 logs are attached.&lt;/P&gt;
&lt;P&gt;Note:&lt;BR /&gt;1. Even though errors are thrown (and "show partitions xxx" will not display the new partition), the underlying HDFS directory and files for the corresponding partition are created successfully.&lt;BR /&gt;2. After the "insert overwrite" statement fails, we can run "msck repair table xxx" to fix the Hive metastore data for the table; after that, "show partitions" displays the newly created partition successfully, and "select" queries the newly inserted partition data successfully.&lt;BR /&gt;3. This happens for both static partitioning and dynamic partitioning, as long as you are inserting data into new partitions.&lt;BR /&gt;4. If you use insert overwrite to insert data into an existing partition (whether the partition is empty or not does not matter), there is no issue.&lt;BR /&gt;5. If you use insert into to insert data, there are no problems.&lt;BR /&gt;6. For a non-partitioned table, both insert overwrite and insert into work without problems.&lt;/P&gt;
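&lt;P&gt;For reference, these are the recovery steps we run after a failed "insert overwrite" (the table and partition value below are just from our test case):&lt;BR /&gt;msck repair table test0317;&lt;BR /&gt;show partitions test0317;&lt;BR /&gt;select * from test0317 where ptdate = "10";&lt;/P&gt;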
&lt;P&gt;Currently, we manually create the needed partitions before executing "insert overwrite" to work around this (e.g. alter table test0317 add partition (ptdate=10);). But this is not a long-term solution. Please help.&lt;/P&gt;
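&lt;P&gt;A sketch of the workaround we currently use, combining our test statements (IF NOT EXISTS keeps the pre-create step safe to re-run; the partition value is just a sample):&lt;BR /&gt;alter table test0317 add if not exists partition (ptdate="10");&lt;BR /&gt;insert overwrite table test0317 partition(ptdate = "10")&lt;BR /&gt;select * from(&lt;BR /&gt;select 2 as user_code, 3 as account)a;&lt;/P&gt;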
&lt;P&gt;===================&lt;BR /&gt;&lt;STRONG&gt;The SQL statements we used:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;use apollo_ods_jzfix;&lt;/P&gt;
&lt;P&gt;create table test0317&lt;BR /&gt;(user_code decimal(10),&lt;BR /&gt;account decimal(19)&lt;BR /&gt;)&lt;BR /&gt;partitioned by(ptdate string)&lt;BR /&gt;stored as parquet;&lt;/P&gt;
&lt;P&gt;set hive.exec.dynamic.partition.mode=nonstrict;&lt;/P&gt;
&lt;P&gt;insert overwrite table test0317 partition(ptdate = "10")&lt;BR /&gt;select * from(&lt;BR /&gt;select 2 as user_code, 3 as account)a;&lt;/P&gt;
&lt;P&gt;insert overwrite table test0317 partition(ptdate)&lt;BR /&gt;select * from (&lt;BR /&gt;select 1 as user_code,3 as account,"8" as ptdate&lt;BR /&gt;union all&lt;BR /&gt;select 1 as user_code,3 as account,"9" as ptdate&lt;BR /&gt;) a;&lt;/P&gt;
&lt;P&gt;=========================&lt;BR /&gt;&lt;STRONG&gt;The client side error log:(beeline)&lt;/STRONG&gt;&lt;BR /&gt;INFO : Loading data to table apollo_ods_jzfix.test0317 partition (ptdate=1) from hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1/.hive-staging_hive_2021-03-17_15-09-13_232_1543365768355672834-7333/-ext-10000&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;INFO : MapReduce Jobs Launched:&lt;BR /&gt;INFO : Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS&lt;BR /&gt;INFO : Total MapReduce CPU Time Spent: 2 seconds 560 msec&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event (state=08S01,code=1)&lt;BR /&gt;0: jdbc:hive2://dev-dw-nn01:10000/&amp;gt;&lt;/P&gt;
&lt;P&gt;===============&lt;BR /&gt;&lt;STRONG&gt;the hive metastore error log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:09:30,039 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-122]: 121: get_partition : db=apollo_ods_jzfix tbl=test0317[1]&lt;BR /&gt;2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition : db=apollo_ods_jzfix tbl=test0317[1]&lt;BR /&gt;2021-03-17 15:09:30,053 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-122]: NoSuchObjectException(message:partition values=[1])&lt;BR /&gt;at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003)&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)&lt;BR /&gt;at com.sun.proxy.$Proxy26.getPartition(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.events.InsertEvent.&amp;lt;init&amp;gt;(InsertEvent.java:62)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737)&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)&lt;BR /&gt;at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193)&lt;BR /&gt;at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)&lt;BR /&gt;at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)&lt;BR /&gt;at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;/P&gt;
&lt;P&gt;2021-03-17 15:09:30,054 ERROR org.apache.thrift.ProcessFunction: [pool-9-thread-122]: Internal error processing fire_listener_event&lt;BR /&gt;org.apache.hadoop.hive.metastore.api.NoSuchObjectException: partition values=[1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy26.getPartition(Unknown Source) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.events.InsertEvent.&amp;lt;init&amp;gt;(InsertEvent.java:62) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source) 
~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;&lt;BR /&gt;=================&lt;BR /&gt;&lt;STRONG&gt;the hiveserver2 
error log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:09:30,016 INFO org.apache.hadoop.hive.ql.exec.MoveTask: [HiveServer2-Background-Pool: Thread-183022]: Partition is: {ptdate=1}&lt;BR /&gt;2021-03-17 15:09:30,033 INFO org.apache.hadoop.hive.common.FileUtils: [HiveServer2-Background-Pool: Thread-183022]: Creating directory if it doesn't exist: hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1&lt;BR /&gt;2021-03-17 15:09:30,057 WARN org.apache.hadoop.hive.metastore.RetryingMetaStoreClient: [HiveServer2-Background-Pool: Thread-183022]: MetaStoreClient lost connection. Attempting to reconnect.&lt;BR /&gt;org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at 
com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Closed a connection to metastore, current connections: 9231&lt;BR /&gt;2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Trying to connect to metastore with URI thrift://dev-dw-dn01:9083&lt;BR /&gt;2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Opened a connection to metastore, current connections: 9232&lt;BR /&gt;2021-03-17 
15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Connected to metastore.&lt;BR /&gt;2021-03-17 15:09:31,196 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_lis&lt;BR /&gt;tener_event2021-03-17 15:09:31,196 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: MapReduce Jobs Launched:&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Total MapReduce CPU Time Spent: 2 seconds 560 msec&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds&lt;BR /&gt;2021-03-17 15:09:31,206 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-183022]: Error running hive query:&lt;BR /&gt;org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. 
org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:329) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:258) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2433) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) 
~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;... 
11 more&lt;BR /&gt;Caused by: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;... 11 more&lt;BR /&gt;======================&lt;BR /&gt;&lt;STRONG&gt;If you use "insert into" to insert data into new partitions of a partitioned table, then there are no problems. 
The corresponding metastore log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,883 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,434 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,465 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : 
db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,466 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: partition_name_has_valid_characters&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=partition_name_has_valid_characters&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix 
tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: add_partition : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=add_partition : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,766 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,767 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: 
[pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore&lt;BR /&gt;2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: alter_partitions : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,915 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=alter_partitions : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,915 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-9-thread-153]: Scope named api_alter_partitions is not closed, cannot be opened.&lt;BR /&gt;2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: New partition values:[5]&lt;BR /&gt;2021-03-17 15:40:35,929 WARN hive.log: [pool-9-thread-153]: Updating partition stats fast for: test0317&lt;BR /&gt;2021-03-17 15:40:35,942 WARN hive.log: [pool-9-thread-153]: Updated size to 519&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 14:41:05 GMT</pubDate>
    <dc:creator>michaelli</dc:creator>
    <dc:date>2022-09-16T14:41:05Z</dc:date>
    <item>
      <title>unable to insert overwrite partitioned hive table in cdh6.2.1 (with kerberos and sentry enabled)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/unable-to-insert-overwrite-partitioned-hive-table-in-cdh6-2/m-p/313526#M225574</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Issue summary:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In CDH 6.2.1, with Kerberos and Sentry enabled, we get errors when using the "insert overwrite" statement to insert data into new partitions of a partitioned table. The SQL statements and detailed metastore and HiveServer2 logs are attached.&lt;/P&gt;
&lt;P&gt;Note:&lt;BR /&gt;1. Even though errors are thrown (and "show partitions xxx" will not display the new partition), the underlying HDFS directory and files for the corresponding partition are created successfully.&lt;BR /&gt;2. After the "insert overwrite" statement fails, we can run "msck repair table xxx" to fix the Hive metastore data for the table; after that, "show partitions" displays the newly created partition, and "select xxx" queries the newly inserted partition data successfully.&lt;BR /&gt;3. This happens for both static partitioning and dynamic partitioning, as long as data is being inserted into new partitions.&lt;BR /&gt;4. If we use insert overwrite to insert data into an existing partition (whether the partition is empty or not does not matter), there is no issue.&lt;BR /&gt;5. If we use insert into to insert data, there are no problems.&lt;BR /&gt;6. For a non-partitioned table, both insert overwrite and insert into work without problems.&lt;/P&gt;
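&lt;P&gt;As a sketch of the recovery flow from note 2 above (using the test0317 table from our test case; the partition value "10" is just an example, adjust the names to your own table):&lt;/P&gt;

```sql
-- After "insert overwrite" fails but the partition directory already exists
-- on HDFS, re-sync the metastore with the directories present on disk:
msck repair table test0317;

-- The new partition should now be listed:
show partitions test0317;

-- And its data should be queryable:
select * from test0317 where ptdate = '10';
```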
&lt;P&gt;Currently, we manually create the needed partitions before executing "insert overwrite" to work around this (like alter table test0317 add partition (ptdate=10);). But this is not a long-term solution. Please help.&lt;/P&gt;
&lt;P&gt;===================&lt;BR /&gt;&lt;STRONG&gt;the sql statements we used:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;use apollo_ods_jzfix;&lt;/P&gt;
&lt;P&gt;create table test0317&lt;BR /&gt;(user_code decimal(10),&lt;BR /&gt;account decimal(19)&lt;BR /&gt;)&lt;BR /&gt;partitioned by(ptdate string)&lt;BR /&gt;stored as parquet;&lt;/P&gt;
&lt;P&gt;set hive.exec.dynamic.partition.mode=nonstrict;&lt;/P&gt;
&lt;P&gt;insert overwrite table test0317 partition(ptdate = "10")&lt;BR /&gt;select * from(&lt;BR /&gt;select 2 as user_code, 3 as account)a;&lt;/P&gt;
&lt;P&gt;insert overwrite table test0317 partition(ptdate)&lt;BR /&gt;select * from (&lt;BR /&gt;select 1 as user_code,3 as account,"8" as ptdate&lt;BR /&gt;union all&lt;BR /&gt;select 1 as user_code,3 as account,"9" as ptdate&lt;BR /&gt;) a;&lt;/P&gt;
&lt;P&gt;=========================&lt;BR /&gt;&lt;STRONG&gt;The client side error log:(beeline)&lt;/STRONG&gt;&lt;BR /&gt;INFO : Loading data to table apollo_ods_jzfix.test0317 partition (ptdate=1) from hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1/.hive-staging_hive_2021-03-17_15-09-13_232_1543365768355672834-7333/-ext-10000&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;INFO : MapReduce Jobs Launched:&lt;BR /&gt;INFO : Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS&lt;BR /&gt;INFO : Total MapReduce CPU Time Spent: 2 seconds 560 msec&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event (state=08S01,code=1)&lt;BR /&gt;0: jdbc:hive2://dev-dw-nn01:10000/&amp;gt;&lt;/P&gt;
&lt;P&gt;===============&lt;BR /&gt;&lt;STRONG&gt;the hive metastore error log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:09:30,039 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-122]: 121: get_partition : db=apollo_ods_jzfix tbl=test0317[1]&lt;BR /&gt;2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition : db=apollo_ods_jzfix tbl=test0317[1]&lt;BR /&gt;2021-03-17 15:09:30,053 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-122]: NoSuchObjectException(message:partition values=[1])&lt;BR /&gt;at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003)&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)&lt;BR /&gt;at com.sun.proxy.$Proxy26.getPartition(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.events.InsertEvent.&amp;lt;init&amp;gt;(InsertEvent.java:62)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737)&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)&lt;BR /&gt;at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208)&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193)&lt;BR /&gt;at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)&lt;BR /&gt;at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)&lt;BR /&gt;at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;/P&gt;
&lt;P&gt;2021-03-17 15:09:30,054 ERROR org.apache.thrift.ProcessFunction: [pool-9-thread-122]: Internal error processing fire_listener_event&lt;BR /&gt;org.apache.hadoop.hive.metastore.api.NoSuchObjectException: partition values=[1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy26.getPartition(Unknown Source) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.events.InsertEvent.&amp;lt;init&amp;gt;(InsertEvent.java:62) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source) 
~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;&lt;BR /&gt;=================&lt;BR /&gt;&lt;STRONG&gt;the hiveserver2 
error log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:09:30,016 INFO org.apache.hadoop.hive.ql.exec.MoveTask: [HiveServer2-Background-Pool: Thread-183022]: Partition is: {ptdate=1}&lt;BR /&gt;2021-03-17 15:09:30,033 INFO org.apache.hadoop.hive.common.FileUtils: [HiveServer2-Background-Pool: Thread-183022]: Creating directory if it doesn't exist: hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1&lt;BR /&gt;2021-03-17 15:09:30,057 WARN org.apache.hadoop.hive.metastore.RetryingMetaStoreClient: [HiveServer2-Background-Pool: Thread-183022]: MetaStoreClient lost connection. Attempting to reconnect.&lt;BR /&gt;org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at 
com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Closed a connection to metastore, current connections: 9231&lt;BR /&gt;2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Trying to connect to metastore with URI thrift://dev-dw-dn01:9083&lt;BR /&gt;2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Opened a connection to metastore, current connections: 9232&lt;BR /&gt;2021-03-17 
15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Connected to metastore.&lt;BR /&gt;2021-03-17 15:09:31,196 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_lis&lt;BR /&gt;tener_event2021-03-17 15:09:31,196 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: MapReduce Jobs Launched:&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Total MapReduce CPU Time Spent: 2 seconds 560 msec&lt;BR /&gt;2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds&lt;BR /&gt;2021-03-17 15:09:31,206 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-183022]: Error running hive query:&lt;BR /&gt;org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. 
org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:329) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:258) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]&lt;BR /&gt;Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2433) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) 
~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;... 
11 more&lt;BR /&gt;Caused by: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event&lt;BR /&gt;at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]&lt;BR /&gt;... 11 more&lt;BR /&gt;======================&lt;BR /&gt;&lt;STRONG&gt;if you are using "insert into" to insert data into new partitions of a partitioned table, then there are no problems. 
The corresponding metastore log:&lt;/STRONG&gt;&lt;BR /&gt;2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,883 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table&lt;BR /&gt;2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,434 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,465 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : 
db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,466 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist&lt;BR /&gt;2021-03-17 15:40:35,486 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: partition_name_has_valid_characters&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=partition_name_has_valid_characters&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix 
tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: add_partition : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=add_partition : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore&lt;BR /&gt;2021-03-17 15:40:35,766 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,767 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: 
[pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist&lt;BR /&gt;2021-03-17 15:40:35,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore&lt;BR /&gt;2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]&lt;BR /&gt;2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: alter_partitions : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,915 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=alter_partitions : db=apollo_ods_jzfix tbl=test0317&lt;BR /&gt;2021-03-17 15:40:35,915 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-9-thread-153]: Scope named api_alter_partitions is not closed, cannot be opened.&lt;BR /&gt;2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: New partition values:[5]&lt;BR /&gt;2021-03-17 15:40:35,929 WARN hive.log: [pool-9-thread-153]: Updating partition stats fast for: test0317&lt;BR /&gt;2021-03-17 15:40:35,942 WARN hive.log: [pool-9-thread-153]: Updated size to 519&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 14:41:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/unable-to-insert-overwrite-partitioned-hive-table-in-cdh6-2/m-p/313526#M225574</guid>
      <dc:creator>michaelli</dc:creator>
      <dc:date>2022-09-16T14:41:05Z</dc:date>
    </item>
    <item>
      <title>Re: unable to insert overwrite partitioned hive table in cdh6.2.1 (with kerberos and sentry enabled)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/unable-to-insert-overwrite-partitioned-hive-table-in-cdh6-2/m-p/322857#M228951</link>
      <description>&lt;P&gt;This is a bug fixed by HIVE-15642. You can work around this issue by setting&amp;nbsp;hive.metastore.dml.events to false.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 20 Aug 2021 01:47:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/unable-to-insert-overwrite-partitioned-hive-table-in-cdh6-2/m-p/322857#M228951</guid>
      <dc:creator>robbiez</dc:creator>
      <dc:date>2021-08-20T01:47:23Z</dc:date>
    </item>
  </channel>
</rss>