Member since
08-18-2017
146
Posts
19
Kudos Received
17
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5019 | 05-09-2024 02:50 PM |
| | 10055 | 09-13-2022 10:50 AM |
| | 4054 | 07-25-2022 12:18 AM |
| | 5526 | 06-24-2019 01:56 PM |
04-28-2026
12:30 AM
Deleting rows in an HDFS-backed table does not immediately reduce file size because HDFS files are immutable by design: individual records cannot be removed in place.

In Hive ACID tables, a DELETE does not touch the base data files at all. Instead, it writes a separate delete delta file that marks rows as logically deleted using row ID references. The physical file size on HDFS stays the same or increases because new delta files are being added. Actual size reduction only happens after a major compaction runs, which rewrites the base files by merging all deltas and physically excluding deleted rows, followed by the HDFS cleaner removing the old files.

In Apache Iceberg, deletes produce position or equality delete files written alongside the existing data files, again increasing HDFS usage until a rewrite_data_files compaction purges the old data. In Apache Hudi Copy-On-Write, a DELETE rewrites the entire affected file immediately, so size does reduce, but with heavy write amplification. In Merge-On-Read, deletes are appended as log files, and compaction is still required for physical reclamation.

The bottom line: DELETE is always append-driven at the HDFS storage layer regardless of table format, and true physical space reclamation requires compaction to run and obsolete files to be purged.
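As a sketch of the reclamation steps described above (the table and catalog names here are hypothetical placeholders):

```sql
-- Hive ACID: the DELETE only writes a delete delta; a major compaction
-- rewrites the base files and physically drops the deleted rows.
DELETE FROM sales WHERE order_id = 1001;
ALTER TABLE sales COMPACT 'major';
SHOW COMPACTIONS;  -- monitor until the compaction succeeds

-- Iceberg (Spark SQL): rewrite data files to apply delete files, then
-- expire old snapshots so the obsolete files become eligible for removal.
CALL spark_catalog.system.rewrite_data_files(table => 'db.sales');
CALL spark_catalog.system.expire_snapshots(table => 'db.sales');
```

Note that on a managed cluster the compaction may also be triggered automatically by the compactor threshold settings; the manual commands above just force the point at which space is actually reclaimed.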
05-10-2024
10:39 AM
1 Kudo
Thanks! Because I have null values in my data set as well, I used coalesce, and it worked! Your query was the basis, though, so thanks again! Query by @nramanaiah that worked for me, as I have null records in the dataset:

select Currency, coalesce(spend_a,0) + coalesce(spend_b,0) + coalesce(spend_c,0) + coalesce(spend_d,0) as total_spend from test
04-23-2024
06:59 AM
If it's a Tez application, the AM logs will show how much memory is currently allocated/consumed by the application and how much free resource is available in the queue at that specific time, e.g.:

2024-04-22 23:27:20,636 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:843776, vCores:206> Free: <memory:2048, vCores:306> pendingRequests: 0 delayedContainers: 205 heartbeats: 101 lastPreemptionHeartbeat: 100
2024-04-22 23:27:30,660 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:155648, vCores:38> Free: <memory:495616, vCores:356> pendingRequests: 0 delayedContainers: 38 heartbeats: 151 lastPreemptionHeartbeat: 150

These allocation details are logged frequently in the Tez AM logs.
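If you need to track these numbers across a long run, a small script can pull them out of the log. This is a minimal sketch assuming the log line format shown above:

```python
import re

# Matches the allocation summary emitted by rm.YarnTaskSchedulerService, e.g.
# "Allocated: <memory:843776, vCores:206> Free: <memory:2048, vCores:306>"
PATTERN = re.compile(
    r"Allocated: <memory:(\d+), vCores:(\d+)> Free: <memory:(\d+), vCores:(\d+)>"
)

def parse_allocation(line):
    """Return (allocated_mb, allocated_vcores, free_mb, free_vcores) or None."""
    m = PATTERN.search(line)
    return tuple(int(g) for g in m.groups()) if m else None

line = ("2024-04-22 23:27:20,636 [INFO] [AMRM Callback Handler Thread] "
        "|rm.YarnTaskSchedulerService|: Allocated: <memory:843776, vCores:206> "
        "Free: <memory:2048, vCores:306> pendingRequests: 0")
print(parse_allocation(line))  # -> (843776, 206, 2048, 306)
```

Feeding every AM log line through this function gives a time series of allocated vs. free queue capacity, which makes it easy to spot when the queue was the bottleneck.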
10-03-2022
07:16 AM
@nramanaiah I have been able to run further testing and can confirm that my partitions are purging as expected! Thanks again for the assistance!
08-16-2022
11:02 PM
@ho_ddeok, Has any of the replies helped resolve your issue? If so, can you please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future?
08-01-2022
10:41 PM
@Hafiz Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
07-29-2022
01:11 AM
Not Yet
06-24-2019
01:56 PM
1 Kudo
These WARN logs should not cause any issue.

1) However, if you want to remove these configs, you can use the syntax below to delete a config on a specific component config_type (search in the Ambari -> Hive configs filter box to find which file needs to be updated):

/var/lib/ambari-server/resources/scripts/configs.py -u <<username>> -p <<password>> -n <<clustername>> -l <<ambari-server-host>> -t <<ambari-server-port>> -a <<action>> -c <<config_type>> -k <<config-key>>

e.g.,

/var/lib/ambari-server/resources/scripts/configs.py -u admin -p <<dummy>> -n cluster1 -l ambari-server-host -t 8080 -a delete -c hive-site -k hive.mapred.strict
/var/lib/ambari-server/resources/scripts/configs.py -u admin -p <<dummy>> -n cluster1 -l ambari-server-host -t 8080 -a delete -c hive-site -k hive.mapred.supports.subdirectories

This is the reference for configs.py: https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations#Modifyconfigurations-Editconfigurationusingconfigs.py

2) To remove the log4j warning, go to Ambari -> Hive configs -> Advanced hive-log4j and comment out the line below:

log4j.appender.DRFA.MaxFileSize

After the above 2 changes, restart the Hive services and all 3 WARNs should go away. If this helps resolve the issue, please accept the answer; it might also help other members in the community.