
Unable to insert overwrite partitioned Hive table in CDH 6.2.1 (with Kerberos and Sentry enabled)

Explorer

Issue summary:

In CDH 6.2.1, with Kerberos and Sentry enabled, we get errors when using an "insert overwrite" statement to insert data into new partitions of a partitioned table. The SQL statements and the detailed metastore and HiveServer2 logs are attached below.

Note:
1. Even though errors are thrown (and "show partitions xxx" does not display the new partition), the underlying HDFS directory and files for the corresponding partition are created successfully.
2. After the "insert overwrite" statement fails, we can run "msck repair table xxx" to fix the Hive metastore data for the table; after that, "show partitions" displays the newly created partition and "select" queries on the newly inserted partition data succeed.
3. This happens for both static and dynamic partitioning, as long as data is being inserted into new partitions.
4. If we use "insert overwrite" to insert data into an existing partition (whether the partition is empty or not does not matter), there is no issue.
5. If we use "insert into" to insert data, there is no problem.
6. With a non-partitioned table, both "insert overwrite" and "insert into" work without problems.

Currently we manually create the needed partitions before executing "insert overwrite" to work around this (e.g. alter table test0317 add partition (ptdate=10);), but that is not a long-term solution. Please help. A sketch of the recovery and workaround steps is shown below.
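For reference, a minimal sketch of the recovery step and of the manual-partition workaround, using the test0317 table from the repro that follows (adjust table, column and partition names to your own schema):

-- recovery after "insert overwrite" into a new partition has failed:
msck repair table test0317;
show partitions test0317;   -- the new partition should now be listed

-- workaround: pre-create the target partition so "insert overwrite"
-- only writes into an already existing partition:
alter table test0317 add if not exists partition (ptdate="10");
insert overwrite table test0317 partition(ptdate = "10")
select * from (select 2 as user_code, 3 as account) a;

This only sidesteps the problem for new partitions; the underlying metastore error still needs a proper fix.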

===================
The SQL statements we used:

use apollo_ods_jzfix;

create table test0317
(user_code decimal(10),
account decimal(19)
)
partitioned by(ptdate string)
stored as parquet;

set hive.exec.dynamic.partition.mode=nonstrict;

insert overwrite table test0317 partition(ptdate = "10")
select * from(
select 2 as user_code, 3 as account)a;

insert overwrite table test0317 partition(ptdate)
select * from (
select 1 as user_code,3 as account,"8" as ptdate
union all
select 1 as user_code,3 as account,"9" as ptdate
) a;

=========================
The client-side error log (beeline):
INFO : Loading data to table apollo_ods_jzfix.test0317 partition (ptdate=1) from hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1/.hive-staging_hive_2021-03-17_15-09-13_232_1543365768355672834-7333/-ext-10000
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS
INFO : Total MapReduce CPU Time Spent: 2 seconds 560 msec
INFO : Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event (state=08S01,code=1)
0: jdbc:hive2://dev-dw-nn01:10000/>

===============
The Hive metastore error log:
2021-03-17 15:09:30,039 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-122]: 121: get_partition : db=apollo_ods_jzfix tbl=test0317[1]
2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition : db=apollo_ods_jzfix tbl=test0317[1]
2021-03-17 15:09:30,053 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-122]: NoSuchObjectException(message:partition values=[1])
at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003)
at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy26.getPartition(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553)
at org.apache.hadoop.hive.metastore.events.InsertEvent.<init>(InsertEvent.java:62)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737)
at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

2021-03-17 15:09:30,054 ERROR org.apache.thrift.ProcessFunction: [pool-9-thread-122]: Internal error processing fire_listener_event
org.apache.hadoop.hive.metastore.api.NoSuchObjectException: partition values=[1]
at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy26.getPartition(Unknown Source) ~[?:?]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.events.InsertEvent.<init>(InsertEvent.java:62) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source) ~[?:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

=================
The HiveServer2 error log:
2021-03-17 15:09:30,016 INFO org.apache.hadoop.hive.ql.exec.MoveTask: [HiveServer2-Background-Pool: Thread-183022]: Partition is: {ptdate=1}
2021-03-17 15:09:30,033 INFO org.apache.hadoop.hive.common.FileUtils: [HiveServer2-Background-Pool: Thread-183022]: Creating directory if it doesn't exist: hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1
2021-03-17 15:09:30,057 WARN org.apache.hadoop.hive.metastore.RetryingMetaStoreClient: [HiveServer2-Background-Pool: Thread-183022]: MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]
at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]
at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Closed a connection to metastore, current connections: 9231
2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Trying to connect to metastore with URI thrift://dev-dw-dn01:9083
2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Opened a connection to metastore, current connections: 9232
2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Connected to metastore.
2021-03-17 15:09:31,196 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
2021-03-17 15:09:31,196 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: MapReduce Jobs Launched:
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Total MapReduce CPU Time Spent: 2 seconds 560 msec
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds
2021-03-17 15:09:31,206 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-183022]: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:329) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:258) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2433) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
... 11 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]
at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]
at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
... 11 more
======================
If you use "insert into" to insert data into new partitions of a partitioned table, there are no problems. The corresponding metastore log:
2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,434 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore
2021-03-17 15:40:35,465 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,466 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist
2021-03-17 15:40:35,486 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: partition_name_has_valid_characters
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=partition_name_has_valid_characters
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: add_partition : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=add_partition : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore
2021-03-17 15:40:35,766 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,767 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist
2021-03-17 15:40:35,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: alter_partitions : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=alter_partitions : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,915 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-9-thread-153]: Scope named api_alter_partitions is not closed, cannot be opened.
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: New partition values:[5]
2021-03-17 15:40:35,929 WARN hive.log: [pool-9-thread-153]: Updating partition stats fast for: test0317
2021-03-17 15:40:35,942 WARN hive.log: [pool-9-thread-153]: Updated size to 519

keep striving!
1 ACCEPTED SOLUTION

Expert Contributor

This is a bug fixed by HIVE-15642. You can work around this issue by setting hive.metastore.dml.events to false.

 


1 REPLY 1

Expert Contributor

This is a bug fixed by HIVE-15642. You can work around this issue by setting hive.metastore.dml.events to false.
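For anyone who wants to try this before a fix, here is a minimal sketch of the workaround, assuming the property can be overridden at the session level; if session-level overrides are restricted in your cluster, set hive.metastore.dml.events=false in the HiveServer2 hive-site.xml instead (for example through the Cloudera Manager safety valve) and restart HiveServer2:

-- stop HiveServer2 from firing DML (insert) events to the metastore for this session,
-- then re-run the failing statement:
set hive.metastore.dml.events=false;

insert overwrite table test0317 partition(ptdate = "10")
select * from (select 2 as user_code, 3 as account) a;

Note that this suppresses the insert events written to the metastore notification log, which other components may consume, so treat it as a stopgap until the HIVE-15642 fix is applied.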