DEBUG : Encoding valid txns info 219001:219001:146687:146697:146737:146757:146787:146807:146867:146897:166938:167107:167127:174557:174567:174577:174597:174627:174637:174677:174747:177917:177977:218895:218985:218994:218997:218998:218999:219000 txnid:219001
INFO : Setting caller context to query id hive_20190110084506_7dbcdcf2-c4e8-41c7-8221-96ccc0f43005
INFO : Executing command(queryId=hive_20190110084506_7dbcdcf2-c4e8-41c7-8221-96ccc0f43005): delete from cudo_test where id = 222222222
INFO : Query ID = hive_20190110084506_7dbcdcf2-c4e8-41c7-8221-96ccc0f43005
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Session is already open
DEBUG : Adding local resource: scheme: "hdfs" host: "" port: -1 file: "/tmp/hive/hive/_tez_session_dir/ea848395-44ec-4400-8409-3398d7a1a630/hive-hcatalog-core.jar"
INFO : Dag name: delete from cudo_test where id = 222222222(Stage-1)
DEBUG : DagInfo: {"context":"Hive","description":"delete from cudo_test where id = 222222222"}
DEBUG : Setting Tez DAG access for queryId=hive_20190110084506_7dbcdcf2-c4e8-41c7-8221-96ccc0f43005 with viewAclString=*, modifyStr=zdo_batch,hive
INFO : Tez session was closed. Reopening...
INFO : Session re-established.
INFO : Status: Running (Executing on YARN cluster with App id application_1547024849051_3852)
INFO : Status: DAG finished successfully in 2.33 seconds
INFO :
INFO : Query Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : OPERATION                                                  DURATION
INFO : ----------------------------------------------------------------------------------------------
INFO : Compile Query                                                 0.13s
INFO : Prepare Plan                                                  0.42s
INFO : Submit Plan                                                  25.18s
INFO : Start DAG                                                     1.01s
INFO : Run DAG                                                       2.33s
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : Task Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : VERTICES      DURATION(ms)   CPU_TIME(ms)    GC_TIME(ms)   INPUT_RECORDS   OUTPUT_RECORDS
INFO : ----------------------------------------------------------------------------------------------
INFO : Map 1              1008.00              0              0               3                1
INFO : Reducer 2           280.00              0              0               1                0
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO :    NUM_SUCCEEDED_TASKS: 4
INFO :    TOTAL_LAUNCHED_TASKS: 4
INFO :    DATA_LOCAL_TASKS: 1
INFO :    AM_CPU_MILLISECONDS: 3140
INFO :    AM_GC_TIME_MILLIS: 0
INFO : File System Counters:
INFO :    FILE_BYTES_READ: 0
INFO :    FILE_BYTES_WRITTEN: 165
INFO :    FILE_READ_OPS: 0
INFO :    FILE_LARGE_READ_OPS: 0
INFO :    FILE_WRITE_OPS: 0
INFO :    HDFS_BYTES_READ: 1240
INFO :    HDFS_BYTES_WRITTEN: 724
INFO :    HDFS_READ_OPS: 9
INFO :    HDFS_LARGE_READ_OPS: 0
INFO :    HDFS_WRITE_OPS: 6
INFO : org.apache.tez.common.counters.TaskCounter:
INFO :    REDUCE_INPUT_GROUPS: 1
INFO :    REDUCE_INPUT_RECORDS: 1
INFO :    COMBINE_INPUT_RECORDS: 0
INFO :    SPILLED_RECORDS: 2
INFO :    NUM_SHUFFLED_INPUTS: 1
INFO :    NUM_SKIPPED_INPUTS: 2
INFO :    NUM_FAILED_SHUFFLE_INPUTS: 0
INFO :    MERGED_MAP_OUTPUTS: 1
INFO :    INPUT_RECORDS_PROCESSED: 3
INFO :    INPUT_SPLIT_LENGTH_BYTES: 450
INFO :    OUTPUT_RECORDS: 1
INFO :    OUTPUT_LARGE_RECORDS: 0
INFO :    OUTPUT_BYTES: 25
INFO :    OUTPUT_BYTES_WITH_OVERHEAD: 45
INFO :    OUTPUT_BYTES_PHYSICAL: 85
INFO :    ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO :    ADDITIONAL_SPILLS_BYTES_READ: 45
INFO :    ADDITIONAL_SPILL_COUNT: 0
INFO :    SHUFFLE_CHUNK_COUNT: 1
INFO :    SHUFFLE_BYTES: 45
INFO :    SHUFFLE_BYTES_DECOMPRESSED: 33
INFO :    SHUFFLE_BYTES_TO_MEM: 0
INFO :    SHUFFLE_BYTES_TO_DISK: 0
INFO :    SHUFFLE_BYTES_DISK_DIRECT: 45
INFO :    NUM_MEM_TO_DISK_MERGES: 0
INFO :    NUM_DISK_TO_DISK_MERGES: 0
INFO :    SHUFFLE_PHASE_TIME: 69
INFO :    MERGE_PHASE_TIME: 71
INFO :    FIRST_EVENT_RECEIVED: 66
INFO :    LAST_EVENT_RECEIVED: 66
INFO : HIVE:
INFO :    CREATED_FILES: 1
INFO :    DESERIALIZE_ERRORS: 0
INFO :    RECORDS_IN_Map_1: 3
INFO :    RECORDS_OUT_1_zdo.cudo_test: 1
INFO :    RECORDS_OUT_INTERMEDIATE_Map_1: 1
INFO : Shuffle Errors:
INFO :    BAD_ID: 0
INFO :    CONNECTION: 0
INFO :    IO_ERROR: 0
INFO :    WRONG_LENGTH: 0
INFO :    WRONG_MAP: 0
INFO :    WRONG_REDUCE: 0
INFO : Shuffle Errors_Reducer_2_INPUT_Map_1:
INFO :    BAD_ID: 0
INFO :    CONNECTION: 0
INFO :    IO_ERROR: 0
INFO :    WRONG_LENGTH: 0
INFO :    WRONG_MAP: 0
INFO :    WRONG_REDUCE: 0
INFO : TaskCounter_Map_1_INPUT_cudo_test:
INFO :    INPUT_RECORDS_PROCESSED: 3
INFO :    INPUT_SPLIT_LENGTH_BYTES: 450
INFO : TaskCounter_Map_1_OUTPUT_Reducer_2:
INFO :    ADDITIONAL_SPILLS_BYTES_READ: 0
INFO :    ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO :    ADDITIONAL_SPILL_COUNT: 0
INFO :    OUTPUT_BYTES: 25
INFO :    OUTPUT_BYTES_PHYSICAL: 85
INFO :    OUTPUT_BYTES_WITH_OVERHEAD: 45
INFO :    OUTPUT_LARGE_RECORDS: 0
INFO :    OUTPUT_RECORDS: 1
INFO :    SHUFFLE_CHUNK_COUNT: 1
INFO :    SPILLED_RECORDS: 1
INFO : TaskCounter_Reducer_2_INPUT_Map_1:
INFO :    ADDITIONAL_SPILLS_BYTES_READ: 45
INFO :    ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO :    COMBINE_INPUT_RECORDS: 0
INFO :    FIRST_EVENT_RECEIVED: 66
INFO :    LAST_EVENT_RECEIVED: 66
INFO :    MERGED_MAP_OUTPUTS: 1
INFO :    MERGE_PHASE_TIME: 71
INFO :    NUM_DISK_TO_DISK_MERGES: 0
INFO :    NUM_FAILED_SHUFFLE_INPUTS: 0
INFO :    NUM_MEM_TO_DISK_MERGES: 0
INFO :    NUM_SHUFFLED_INPUTS: 1
INFO :    NUM_SKIPPED_INPUTS: 2
INFO :    REDUCE_INPUT_GROUPS: 1
INFO :    REDUCE_INPUT_RECORDS: 1
INFO :    SHUFFLE_BYTES: 45
INFO :    SHUFFLE_BYTES_DECOMPRESSED: 33
INFO :    SHUFFLE_BYTES_DISK_DIRECT: 45
INFO :    SHUFFLE_BYTES_TO_DISK: 0
INFO :    SHUFFLE_BYTES_TO_MEM: 0
INFO :    SHUFFLE_PHASE_TIME: 69
INFO :    SPILLED_RECORDS: 1
INFO : TaskCounter_Reducer_2_OUTPUT_out_Reducer_2:
INFO :    OUTPUT_RECORDS: 0
INFO : Starting task [Stage-2:DEPENDENCY_COLLECTION] in serial mode
INFO : Starting task [Stage-0:MOVE] in serial mode
INFO : Loading data to table zdo.cudo_test from hdfs:///apps/hive/warehouse/zdo.db/cudo_test/.hive-staging_hive_2019-01-10_08-45-06_812_4048640062473008600-14/-ext-10000
INFO : Starting task [Stage-3:STATS] in serial mode
--------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED
--------------------------------------------------------------------------------
Map 1 ..........        llap     SUCCEEDED      1          1        0        0       0
Reducer 2 ......        llap     SUCCEEDED      3          3        0        0       0
--------------------------------------------------------------------------------
VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 5.59 s
--------------------------------------------------------------------------------
INFO : Table zdo.cudo_test stats: [numFiles=7, numRows=0, totalSize=4624, rawDataSize=0]
INFO : Resetting the caller context to HIVE_SSN_ID:e3e6b6d3-035f-4482-bd71-f642d5647001
INFO : Completed executing command(queryId=hive_20190110084506_7dbcdcf2-c4e8-41c7-8221-96ccc0f43005); Time taken: 31.897 seconds
INFO : OK
ERROR : FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the metastore)
org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore
    at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:407)
    at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1152)
    at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1423)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Unable to update transaction database com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'hive.WRITE_SET' doesn't exist
    at sun.reflect.GeneratedConstructorAccessor123.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
    at com.mysql.jdbc.Util.getInstance(Util.java:387)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:941)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2546)
    at com.mysql.jdbc.StatementImpl.executeUpdateInternal(StatementImpl.java:1541)
    at com.mysql.jdbc.StatementImpl.executeLargeUpdate(StatementImpl.java:2605)
    at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1469)
    at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:707)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:6528)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
    at com.sun.proxy.$Proxy32.commit_txn(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.commitTxn(HiveMetaStoreClient.java:2034)
    at sun.reflect.GeneratedMethodAccessor140.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
    at com.sun.proxy.$Proxy33.commitTxn(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor140.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2231)
    at com.sun.proxy.$Proxy33.commitTxn(Unknown Source)
    at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:398)
    at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1152)
    at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1423)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:801)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:6528)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
    at com.sun.proxy.$Proxy32.commit_txn(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.commitTxn(HiveMetaStoreClient.java:2034)
    at sun.reflect.GeneratedMethodAccessor140.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
    at com.sun.proxy.$Proxy33.commitTxn(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor140.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2231)
    at com.sun.proxy.$Proxy33.commitTxn(Unknown Source)
    at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:398)
    ... 19 more
DEBUG : Shutting down query delete from cudo_test where id = 222222222
Error: Attempt to unlock lock lockid:586779 associated with an open transaction, Unlocking locks associated with transaction not permitted. lockid:586779 intLockId:1 txnid:219001 db:zdo table:cudo_test partition:null state:ACQUIRED type:SHARED_WRITE (state=,code=0)
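The root cause in the trace above is `Table 'hive.WRITE_SET' doesn't exist`: the DELETE itself ran fine, but the ACID commit failed because the metastore's backing MySQL database is missing the `WRITE_SET` table that `TxnHandler.commitTxn` writes to for write-conflict detection (added by HIVE-13395). This typically indicates the metastore schema was not upgraded in step with the Hive binaries; the supported fix is to run `schematool -dbType mysql -upgradeSchema` against the metastore database. The trailing "Attempt to unlock lock" error is a knock-on effect: since the commit failed, txnid 219001 stayed open and still held its SHARED_WRITE lock. For reference only, the table those upgrade scripts create looks like the sketch below; verify the DDL against the `hive-txn-schema-*.mysql.sql` script shipped with your exact Hive version rather than applying it blindly.

```sql
-- Sketch of the WRITE_SET definition from Hive's MySQL transactional
-- schema scripts (HIVE-13395). Check the script bundled with your Hive
-- release; prefer schematool -upgradeSchema over hand-applying DDL.
CREATE TABLE WRITE_SET (
  WS_DATABASE varchar(128) NOT NULL,
  WS_TABLE varchar(128) NOT NULL,
  WS_PARTITION varchar(767),
  WS_TXNID bigint NOT NULL,
  WS_COMMIT_ID bigint NOT NULL,
  WS_OPERATION_TYPE char(1) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```

After the schema is repaired, the stuck transaction's lock should clear once the transaction is aborted or times out; re-running the DELETE can then commit normally.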