Created 04-15-2024 06:43 AM
Hello.
Before deleting rows from a specific table (463,462 rows in the table), the HDFS file size is:
$ hadoop fs -du -s -h /apps/hive/warehouse/prd_thmil.db/th_mil_fb_code_value_brut
54.2 M 162.6 M /apps/hive/warehouse/prd_thmil.db/th_mil_fb_code_value_brut
54.2 MB is the size of the single data file; the second column (162.6 MB) is the total disk space consumed including replication (3 replicas here, i.e. the block plus 2 copies), so that's fine.
But after deleting more than 450,000 rows from the table (12,890 rows remaining after the DELETE), the file size didn't change at all.
Is this normal? When new rows are added to the table, will the file size not grow, with HDFS 'overwriting' the older data with the new rows?
Regards
Created 04-15-2024 01:08 PM
Deleting rows in Hive is like hiding books in a library:
File size stays the same: the existing HDFS files are immutable, so nothing is erased; the deleted rows are only marked as hidden.
The markers live in new files: Hive records which rows are deleted in small extra (delete delta) files alongside the data.
No immediate shrink: rewriting the data files is expensive, so it is deferred until a compaction runs.
That's why the file size didn't change. It's normal behavior for Hive on HDFS!
Created 04-15-2024 11:05 AM
Existing files won't be rewritten by a DELETE query; instead, the ROW__IDs of the deleted rows are written to a new delete_delta directory. Read queries apply the deleted ROW__IDs against the existing files to exclude those rows.
Triggering a major compaction on the table will rewrite new files, merging the delta and delete_delta directories.
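For reference, a major compaction can be triggered manually with an ALTER TABLE statement. The sketch below assumes the table from the question (prd_thmil.th_mil_fb_code_value_brut); adjust to your environment:

```sql
-- Request a major compaction; it runs asynchronously via the metastore
ALTER TABLE prd_thmil.th_mil_fb_code_value_brut COMPACT 'major';

-- Check the queue and state of compaction jobs
SHOW COMPACTIONS;
```

Once the compaction finishes and the cleaner removes the obsolete delta files, `hadoop fs -du` should report the reduced size.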
Created 04-23-2024 01:10 AM
Thank you, guys, for your answers.
Created 03-09-2026 11:04 PM - last edited 03-10-2026 05:07 AM by cjervis
Yes, this is normal behavior in Hive. When you delete rows, the underlying HDFS files usually don't shrink automatically because HDFS doesn't modify files in place. You typically need to run compaction or rewrite the table (like using INSERT OVERWRITE) to reclaim the space.
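As a sketch of the rewrite approach mentioned above (table name taken from the original question; this assumes the table supports INSERT OVERWRITE in your Hive version):

```sql
-- Rewrite the table so only the remaining live rows are stored in new files
INSERT OVERWRITE TABLE prd_thmil.th_mil_fb_code_value_brut
SELECT * FROM prd_thmil.th_mil_fb_code_value_brut;
```

Afterwards, re-run `hadoop fs -du -s -h` on the table directory to confirm the space was reclaimed.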
Created 04-28-2026 12:30 AM
Deleting rows in an HDFS-backed table does not immediately reduce file size because HDFS files are immutable by design, so individual records cannot be removed in place.
In Hive ACID tables, a DELETE does not touch the base data files at all. Instead, it writes a separate delete delta file that marks rows as logically deleted using row ID references. The physical file size on HDFS stays the same or increases because new delta files are being added. Actual size reduction only happens after a major compaction runs, which rewrites the base files by merging all deltas and physically excluding deleted rows, followed by the HDFS cleaner removing the old files.
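Illustratively, the table directory after the DELETE might look like the sketch below (directory names and write IDs are hypothetical; exact layout varies by Hive version):

```
/apps/hive/warehouse/prd_thmil.db/th_mil_fb_code_value_brut/
├── base_0000001/                 <- original 54.2 MB of data, untouched
├── delete_delta_0000002_0000002/ <- ROW__IDs of the ~450,000 deleted rows
└── base_0000002/                 <- written by major compaction: only the
                                     12,890 live rows, old dirs then cleaned
```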
In Apache Iceberg, deletes produce position or equality delete files written alongside the existing data files, again increasing HDFS usage until a rewrite_data_files compaction rewrites the data and snapshot expiration purges the obsolete files. In Apache Hudi Copy-On-Write, a DELETE rewrites the entire affected file immediately, so the size does shrink, but at the cost of heavy write amplification. In Merge-On-Read, deletes are appended as log files and compaction is still required before space is physically reclaimed.
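In Iceberg, for example, the rewrite and the physical purge are two separate steps (Spark SQL sketch; the catalog and table names are placeholders):

```sql
-- Compact data files and fold in the delete files
CALL spark_catalog.system.rewrite_data_files(table => 'db.tbl');

-- Old files are only removed from HDFS once no snapshot references them
CALL spark_catalog.system.expire_snapshots(table => 'db.tbl');
```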
The bottom line is that DELETE is always append-driven at the HDFS storage layer regardless of table format, and true physical space reclamation requires compaction to run and obsolete files to be purged.