Member since: 05-08-2020
Posts: 11
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1778 | 06-09-2021 09:58 PM |
 | 1787 | 06-09-2021 02:11 AM |
06-09-2021
09:58 PM
A final update on solution 3:

1. When you use Beeline to connect to a HiveServer2 instance (let's name it hiveserver2-instance1) and submit a statement like "select udfTest(20210101) from testTableA", and the UDF itself contains Java code that connects to the same instance hiveserver2-instance1 and executes any statement, hiveserver2-instance1 stops functioning properly.
2. When you use Beeline to connect to hiveserver2-instance1 and submit the same statement, but the UDF's Java code connects to a different instance, say hiveserver2-instance2, and executes any statement, then both hiveserver2-instance1 and hiveserver2-instance2 function properly.
3. When you use the Hive CLI (hive --service cli) to submit a statement like "select udfTest(20210101) from testTableA", and the UDF calls another HiveServer2 instance such as hiveserver2-instance1, then both the Hive CLI and the HiveServer2 instance function properly.
4. When you use Beeline to connect to hiveserver2-instance1 and submit a statement like "select udfTest(user_code) from testTableA" (a column reference instead of a constant), and the UDF connects to the same instance hiveserver2-instance1 and executes any statement, then hiveserver2-instance1 functions properly.

The root cause is whether the same HiveServer2 instance acts as both the SQL client and the SQL server. That is not the case in scenarios 2 and 3, where you use a different HiveServer2 instance or the Hive CLI. It is also not the case in scenario 4: there, an MR/Tez/Spark job is generated and scheduled to run in a YARN container, and that container acts as the SQL client that connects back to HiveServer2 to submit SQL. But it is the case in scenario 1: when HiveServer2 analyzes and compiles "select udfTest(20210101) from testTableA", it finds that no map task needs to be generated (the argument is the constant 20210101, so no table records need to be fetched), so as part of the analyze-and-compile process it connects to itself and tries to execute the UDF call itself, which makes it both the SQL client and the SQL server. To sum up, issuing SQL calls against HiveServer2 inside a UDF is not good practice. A sketch of the pattern is shown below.
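For reference, here is a minimal sketch of the kind of UDF being described: one that opens a JDBC connection back to HiveServer2 from inside evaluate(). The class name, JDBC URL, and query are hypothetical, and hive-jdbc must be on the classpath; the point is only to illustrate the client-inside-server loop analyzed above, not a recommended pattern.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.hive.ql.exec.UDF;

// Hypothetical UDF that connects back to HiveServer2 via JDBC.
// If the URL points at the same HiveServer2 instance that is
// compiling/executing this UDF (scenario 1 above), the call deadlocks.
public class UdfTest extends UDF {

  // Placeholder URL -- substitute your own HiveServer2 host and port.
  private static final String HS2_URL =
      "jdbc:hive2://hiveserver2-instance1:10000/default";

  public String evaluate(Integer ptdate) {
    try (Connection conn = DriverManager.getConnection(HS2_URL);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "select parm_value from config_table where parm_key = 'k'")) {
      return rs.next() ? rs.getString(1) : null;
    } catch (Exception e) {
      throw new RuntimeException("UDF-side SQL call failed", e);
    }
  }
}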
06-09-2021
02:11 AM
A self-update on solution 3: I tested this solution in my CDH 6.2 environment by using Beeline to connect to HiveServer2 and issuing a SQL query that includes the UDF (the UDF itself contains code to connect to HiveServer2 and issue SQL queries). It turns out this kind of UDF usage makes the HiveServer2 service stop functioning properly: the UDF call hangs for a long time with no result returned (I think it will hang forever, until we restart the HiveServer2 service). Meanwhile, other Beeline clients can still connect to HiveServer2, but no SQL statement completes; even a simple "show databases" hangs indefinitely in the same way. I think this kind of UDF usage, with SQL queries against HiveServer2 inside the UDF, WILL NOT function properly: when we submit the UDF call to HiveServer2, the UDF itself is first analyzed by HiveServer2, which means the SQL call against HiveServer2 inside the UDF code is also executed by HiveServer2, connecting to itself and issuing SQL queries against itself, making the server-side HiveServer2 also a client.
06-08-2021
03:23 AM
Hi guys, how do you get user-specified configuration data into a Hive UDF? What is your solution? I think there are basically four ways to achieve this goal:

1. Use an HDFS file to store the configuration data in XML/JSON/text format, then read the HDFS file in the UDF; users change the HDFS file when they want to change specific configuration parameters.
2. Use a local XML/JSON/text file packed into the UDF jar, then read the local file in the UDF; users change the local file and repack it into the UDF jar when they want to change specific configuration parameters.
3. Use a Hive table to store the configuration data, then read the table with Hive SQL DML statements inside the UDF (I know this sounds strange; normally we don't issue SQL queries to HiveServer2 from inside a UDF, but it should also be possible); users can change specific configuration parameters with Hive SQL DML.
4. Use a Hive table to store the configuration data, then read the table's underlying HDFS file in the UDF to get the configuration details (of course, the configuration table needs to be stored in a text format such as CSV); users can change specific configuration parameters with Hive SQL DML.

A sketch of option 1 is shown below. What is your way of achieving this? Has anyone used method 3 above?
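For illustration, here is a minimal sketch of option 1: a helper that loads a properties-style configuration file from HDFS, which a UDF's evaluate() can then consult. The path and class name are hypothetical, and error handling is kept minimal.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper for option 1: load UDF configuration from an HDFS file.
// Users update the file in place; the UDF reads it once per JVM (per task).
public final class UdfConfigLoader {

  // Assumed location of the configuration file -- adjust to your cluster.
  private static final String CONFIG_PATH = "/apps/udf/udf-config.properties";

  private static volatile Properties cached;

  public static Properties load() throws IOException {
    if (cached == null) {
      synchronized (UdfConfigLoader.class) {
        if (cached == null) {
          Configuration conf = new Configuration(); // picks up core-site.xml etc.
          FileSystem fs = FileSystem.get(conf);     // default FS, normally HDFS
          Properties props = new Properties();
          try (InputStream in = fs.open(new Path(CONFIG_PATH))) {
            props.load(in);
          }
          cached = props;
        }
      }
    }
    return cached;
  }
}

A UDF can then call UdfConfigLoader.load().getProperty("some.parm") without opening any connection back to HiveServer2, which sidesteps the client-inside-server problem described in the updates above.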
Labels: Apache Hive
05-27-2021
02:04 AM
@VidyaSargur, can you please help with this?
05-27-2021
12:19 AM
I am not able to change my company name in my profile, as shown below. I also noticed that the system now shows "Partner (No Longer Active)". I am not sure how to handle these two issues.
03-21-2021
07:13 PM
Issue summary:
In CDH 6.2.1, with Kerberos and Sentry enabled, we get errors when using an "insert overwrite" statement to insert data into new partitions of a partitioned table. The SQL statement and the detailed metastore and HiveServer2 logs are attached below.

Notes:
1. Even though errors are thrown (and "show partitions xxx" will not display the new partition), the underlying HDFS directory and files for the corresponding partition are created successfully.
2. After the errors are thrown for the "insert overwrite" statement, we can run "msck repair table xxx" to fix the Hive metastore data for the table; after that, "show partitions" displays the newly created partition successfully, and a "select ..." queries the newly inserted partition data successfully.
3. This happens for both static and dynamic partitioning, as long as you are inserting data into new partitions.
4. If you use "insert overwrite" to insert data into an existing partition (whether the partition is empty or not does not matter), there is no issue.
5. If you use "insert into" to insert data, there is no problem.
6. For a non-partitioned table, both "insert overwrite" and "insert into" work without problems.

Currently, we manually create the needed partitions before executing "insert overwrite" to work around this (for example: alter table test0317 add partition (ptdate=10);). But this is not a long-term solution. Please help.
=================== The SQL statements we used:
use apollo_ods_jzfix;
create table test0317 (user_code decimal(10), account decimal(19) ) partitioned by(ptdate string) stored as parquet;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table test0317 partition(ptdate = "10") select * from( select 2 as user_code, 3 as account)a;
insert overwrite table test0317 partition(ptdate) select * from ( select 1 as user_code,3 as account,"8" as ptdate union all select 1 as user_code,3 as account,"9" as ptdate ) a;
========================= The client-side error log (Beeline):

INFO : Loading data to table apollo_ods_jzfix.test0317 partition (ptdate=1) from hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1/.hive-staging_hive_2021-03-17_15-09-13_232_1543365768355672834-7333/-ext-10000
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS
INFO : Total MapReduce CPU Time Spent: 2 seconds 560 msec
INFO : Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event (state=08S01,code=1)
0: jdbc:hive2://dev-dw-nn01:10000/>
=============== The Hive metastore error log:

2021-03-17 15:09:30,039 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-122]: 121: get_partition : db=apollo_ods_jzfix tbl=test0317[1]
2021-03-17 15:09:30,044 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-122]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition : db=apollo_ods_jzfix tbl=test0317[1]
2021-03-17 15:09:30,053 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-122]: NoSuchObjectException(message:partition values=[1])
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003)
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
    at com.sun.proxy.$Proxy26.getPartition(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553)
    at org.apache.hadoop.hive.metastore.events.InsertEvent.<init>(InsertEvent.java:62)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737)
    at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
    at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-03-17 15:09:30,054 ERROR org.apache.thrift.ProcessFunction: [pool-9-thread-122]: Internal error processing fire_listener_event
org.apache.hadoop.hive.metastore.api.NoSuchObjectException: partition values=[1]
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:2003) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy26.getPartition(Unknown Source) ~[?:?]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:3553) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.events.InsertEvent.<init>(InsertEvent.java:62) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(HiveMetaStore.java:6737) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy28.fire_listener_event(Unknown Source) ~[?:?]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14208) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$fire_listener_event.getResult(ThriftHiveMetastore.java:14193) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

================= The HiveServer2 error log:

2021-03-17 15:09:30,016 INFO org.apache.hadoop.hive.ql.exec.MoveTask: [HiveServer2-Background-Pool: Thread-183022]: Partition is: {ptdate=1}
2021-03-17 15:09:30,033 INFO org.apache.hadoop.hive.common.FileUtils: [HiveServer2-Background-Pool: Thread-183022]: Creating directory if it doesn't exist: hdfs://dev-dw-nn01:8020/user/hive/warehouse/apollo_ods_jzfix.db/test0317/ptdate=1
2021-03-17 15:09:30,057 WARN org.apache.hadoop.hive.metastore.RetryingMetaStoreClient: [HiveServer2-Background-Pool: Thread-183022]: MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]
    at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) [?:?]
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Closed a connection to metastore, current connections: 9231
2021-03-17 15:09:31,063 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Trying to connect to metastore with URI thrift://dev-dw-dn01:9083
2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Opened a connection to metastore, current connections: 9232
2021-03-17 15:09:31,065 INFO hive.metastore: [HiveServer2-Background-Pool: Thread-183022]: Connected to metastore.
2021-03-17 15:09:31,196 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask.
org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
2021-03-17 15:09:31,196 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: MapReduce Jobs Launched:
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Stage-Stage-1: Map: 1 Cumulative CPU: 2.56 sec HDFS Read: 5344 HDFS Write: 609 HDFS EC Read: 0 SUCCESS
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Total MapReduce CPU Time Spent: 2 seconds 560 msec
2021-03-17 15:09:31,197 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-183022]: Completed executing command(queryId=hive_20210317150913_9d734c54-f0cf-4dc7-9117-bc7f59c2cb61); Time taken: 17.758 seconds
2021-03-17 15:09:31,206 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-183022]: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:329) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:258) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2433) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 11 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing fire_listener_event
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4836) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4823) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:2531) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]
    at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2562) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at com.sun.proxy.$Proxy34.fireListenerEvent(Unknown Source) ~[?:?]
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:2431) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:1629) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1525) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1489) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:501) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) ~[hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 11 more

====================== If you use "insert into" to insert data into new partitions of a partitioned table, there are no problems.
The corresponding metastore log:

2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,770 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,802 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,871 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,880 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_indexes : db=_dummy_database tbl=_dummy_table
2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,918 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partitions_ps_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:17,936 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,434 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore
2021-03-17 15:40:35,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore
2021-03-17 15:40:35,465 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,466 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist
2021-03-17 15:40:35,486 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist
2021-03-17 15:40:35,486 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: partition_name_has_valid_characters
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=partition_name_has_valid_characters
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,583 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,590 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: add_partition : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,610 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=add_partition : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,746 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Cleaning up thread local RawStore...
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Cleaning up thread local RawStore...
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Done cleaning up thread local RawStore
2021-03-17 15:40:35,764 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=Done cleaning up thread local RawStore
2021-03-17 15:40:35,766 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,767 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.queue.access.check does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.sessions.custom.queue.allowed does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.sentry.conf.url does not exist
2021-03-17 15:40:35,787 WARN org.apache.hadoop.hive.conf.HiveConf: [pool-9-thread-153]: HiveConf of name hive.server2.initialize.default.sessions does not exist
2021-03-17 15:40:35,787 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,888 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_table : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,897 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=get_partition_with_auth : db=apollo_ods_jzfix tbl=test0317[5]
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: 152: alter_partitions : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-153]: ugi=hive/dev-dw-nn01@GYDW.COM ip=10.2.91.100 cmd=alter_partitions : db=apollo_ods_jzfix tbl=test0317
2021-03-17 15:40:35,915 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-9-thread-153]: Scope named api_alter_partitions is not closed, cannot be opened.
2021-03-17 15:40:35,915 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-153]: New partition values:[5]
2021-03-17 15:40:35,929 WARN hive.log: [pool-9-thread-153]: Updating partition stats fast for: test0317
2021-03-17 15:40:35,942 WARN hive.log: [pool-9-thread-153]: Updated size to 519
12-11-2020
05:50 AM
The SocketTimeoutException in the title occurs when the Thrift client inside the HiveConnection object is actively reading SQL results from HiveServer2 (the Thrift server) and receives nothing before the TSocket timeout expires. You can check the source code in HiveConnection.setupLoginTimeout and HiveAuthFactory.getSocketTransport. So you need to either tune HiveServer2 or increase the TSocket timeout setting. For now, the only way to increase the TSocket timeout is via DriverManager.setLoginTimeout(); a sketch is shown below. You can check these JIRAs for more information:
https://issues.apache.org/jira/browse/HIVE-22196
https://issues.apache.org/jira/browse/HIVE-6679
https://issues.apache.org/jira/browse/HIVE-12371
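For illustration, a minimal sketch of the workaround just described: raising the JDBC login timeout before opening the connection, since the Hive JDBC driver reuses that value as the TSocket read timeout. The URL and the 300-second value are placeholders, not values from the original post.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveTimeoutExample {
  public static void main(String[] args) throws Exception {
    // Seconds; the Hive JDBC driver picks this up as the socket read
    // timeout (see HiveConnection.setupLoginTimeout).
    DriverManager.setLoginTimeout(300);

    // Placeholder URL -- point this at your own HiveServer2 instance.
    String url = "jdbc:hive2://hiveserver2-host:10000/default";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("select 1")) {
      while (rs.next()) {
        System.out.println(rs.getInt(1));
      }
    }
  }
}

Note that DriverManager.setLoginTimeout() is JVM-global, so it affects every JDBC connection opened afterwards in the same process.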
11-23-2020
02:06 AM
Nope, I didn't use any custom/auxiliary jars. I am not very sure how jars are loaded when using the Spark execution engine for Hive, but I do notice that the classpath is tailored by /opt/cloudera/parcels/CDH/lib/hive/bin/hive, as shown below, to add Spark-related jars. This has nothing to do with antlr-runtime-xx.jar or antlr4-runtime-xx.jar (so I am confused why this happens for Hive on Spark but not for Hive on MR):

# add Spark jars to the classpath
if [[ -n "$SPARK_HOME" ]]
then
  CLASSPATH=${CLASSPATH}:${SPARK_HOME}/jars/spark-core*.jar
  CLASSPATH=${CLASSPATH}:${SPARK_HOME}/jars/spark-unsafe*.jar
  CLASSPATH=${CLASSPATH}:${SPARK_HOME}/jars/scala-library*.jar
fi
11-22-2020
05:55 PM
Just an add-on: the underlying class that caused the problem, org/antlr/runtime/tree/CommonTree, can't be found in antlr4-runtime-xxx.jar, but can be found in antlr-runtime-xxx.jar, as the attached screenshot showed. So we copied antlr-runtime-xxx.jar from the standard Hive lib into the standard Spark jar lib, and our issue seems to be resolved by this.
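As a side note, here is a small hypothetical snippet for verifying at runtime which jar a class was actually loaded from, which can help confirm this kind of jar conflict; the class name is the one from the stack trace:

public class WhichJar {
  public static void main(String[] args) throws Exception {
    // Resolve the class that triggered the NoClassDefFoundError and print
    // the jar it was loaded from (the code source may be null for classes
    // loaded by the bootstrap classloader).
    Class<?> c = Class.forName("org.antlr.runtime.tree.CommonTree");
    System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
  }
}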
11-16-2020
11:31 PM
In CDH 6.x, when running Hive SQL with the Spark execution engine, I sometimes encounter the error below, which does not happen in CDH 5.x:

scheduler.TaskSetManager: Lost task 0.1 in stage 22.0 (TID 37, node03, executor 1): UnknownReason
util.Utils: uncaught exception in thread task-result-getter-1
java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
    at java.lang.ClassLoader.defineClass1(Native Method)

If I switch to the MR execution engine, the above error is gone. This seems to be related to the loading of classes in antlr-runtime-xxx.jar and antlr4-runtime-xx.jar under /opt/cloudera/parcels/CDH/lib/hive/lib.