Created 12-26-2018 06:20 AM
It seems your table is not partitioned. When you run "INSERT OVERWRITE" against a partition of an external table under an existing HDFS directory, Hive behaves differently depending on whether the partition definition already exists in the Hive metastore:
Reference here
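For illustration, a hedged sketch of the two cases the reference covers; the table, column, and path names (ext_sales, staging_sales, /data/ext_sales) are hypothetical, and the linked reference describes how Hive treats each case:

-- External partitioned table over an existing HDFS directory (hypothetical names/paths).
CREATE EXTERNAL TABLE ext_sales (id INT, amount DOUBLE)
PARTITIONED BY (ds STRING)
STORED AS ORC
LOCATION '/data/ext_sales';

-- Case 1: the partition definition already exists in the metastore
-- before the INSERT OVERWRITE runs.
ALTER TABLE ext_sales ADD PARTITION (ds='2018-12-26')
LOCATION '/data/ext_sales/ds=2018-12-26';
INSERT OVERWRITE TABLE ext_sales PARTITION (ds='2018-12-26')
SELECT id, amount FROM staging_sales;

-- Case 2: the directory may already exist on HDFS, but the partition is not yet
-- registered in the metastore when the INSERT OVERWRITE runs.
INSERT OVERWRITE TABLE ext_sales PARTITION (ds='2018-12-27')
SELECT id, amount FROM staging_sales;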
Please accept the answer you found most useful.
Created 12-26-2018 07:04 AM
My table is a normal (managed) table. When I run INSERT OVERWRITE, I found that the old data under the HDFS directory is put into a folder such as base_0000003. Why isn't the old data moved to the HDFS Trash (recycle bin)? That is what I cannot understand.
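For context, a minimal sketch of how one might reproduce this observation on a Hive 3 managed transactional table; the table name demo_tbl is hypothetical, and the exact base_*/delta_* numbers depend on the write IDs of your own statements:

-- Hypothetical managed ACID table.
CREATE TABLE demo_tbl (id INT) STORED AS ORC TBLPROPERTIES ('transactional'='true');
INSERT INTO demo_tbl VALUES (1);            -- rows land in a delta_* subdirectory
INSERT INTO demo_tbl VALUES (2);
INSERT OVERWRITE TABLE demo_tbl SELECT 3;   -- a new base_* subdirectory (e.g. base_0000003) appears
-- Listing the table's warehouse directory afterwards (e.g. hdfs dfs -ls <table location>)
-- shows the base_*/delta_* folders rather than files moved to .Trash.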
Created 12-26-2018 07:36 AM
Could you please share the output of the command below? Is it Hive 3.x or Hive 2.x?
DESCRIBE FORMATTED <table_name>
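For example (demo_tbl is a hypothetical table name), the parts of the output that matter here are the Table Type and the Table Parameters section:

DESCRIBE FORMATTED demo_tbl;
-- In the output, check:
--   Table Type       : MANAGED_TABLE vs. EXTERNAL_TABLE
--   Table Parameters : properties such as transactional and auto.purge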
Created on 12-26-2018 07:42 AM - edited 08-17-2019 03:20 PM
I use Hive 3.0; this is the information I provided.
Created 12-26-2018 07:48 AM
@Jack
From your output above I can see it is a MANAGED_TABLE. If the table has TBLPROPERTIES ("auto.purge"="true"), the previous data of the table is not moved to Trash when an INSERT OVERWRITE query is run against it. This functionality applies only to managed tables and is turned off when the "auto.purge" property is unset or set to false. For more detail, see HIVE-15880.
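A minimal sketch of how that property is toggled (demo_tbl and staging_tbl are hypothetical names); with "auto.purge"="true" the data replaced by INSERT OVERWRITE skips the Trash, and unsetting it or setting it to false restores the default behaviour:

-- Skip the Trash for data replaced by INSERT OVERWRITE on this managed table.
ALTER TABLE demo_tbl SET TBLPROPERTIES ('auto.purge'='true');
INSERT OVERWRITE TABLE demo_tbl SELECT * FROM staging_tbl;

-- Turn the behaviour off again.
ALTER TABLE demo_tbl SET TBLPROPERTIES ('auto.purge'='false');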