Hive Insert Into S3 External Table Overwrites Whole File!

Contributor

All,

We have been facing an issue when inserting data from a managed table into an external table on S3. Each time there is new data in the managed table, we need to append it to the external S3 table. Instead of appending, the insert replaces the old data with the newly received data (the old data is overwritten). I came across a similar JIRA thread, but that patch is for Apache Hive (link at the bottom). Since we are on HDP, can anyone help me out with this?

Below are the versions:

HDP 2.5.3
Hive 1.2.1000.2.5.3.0-37
create external table tests3prd (c1 string, c2 string) location 's3a://testdata/test/tests3prd/'; 

create table testint (c1 string,c2 string); 

insert into testint values (1,2);

[screenshot: 14671-pic1.png]

insert into tests3prd select * from testint; -- run twice

[screenshot: 14672-pic2.png]

When I re-insert the same values (1,2), it overwrites the existing row and replaces it with the new record.

Here are the S3 external files, where each time the *0000_0 file is overwritten instead of a new copy being added.

[screenshot: 14675-pic3.png]
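For reference, the overwrite can be watched from the Hive CLI by listing the table location between inserts (a quick check using the location from the DDL above; exact file names may vary):

dfs -ls s3a://testdata/test/tests3prd/;

After each insert, the listing shows the same *0000_0 file with a new timestamp, rather than an additional file.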

PS: JIRA thread: https://issues.apache.org/jira/browse/HIVE-15199

10 REPLIES

Contributor

@vpoornalingam Will you be able to check this out for me?

Contributor

This is fixed in Hortonworks Cloud. Is this an on-prem cluster or Hortonworks Cloud?

Contributor

Thanks Rajesh for the reply. This is on AWS EC2 instances installed with HDP 2.5.3. Let me know if you know of any workaround.

Cheers,

Ram

Expert Contributor

You can try the workaround below (see the sketch after this list):

- Create a merged temp table (old data + new data) using UNION ALL.

- Insert overwrite the final table with the merged data.

- Drop the temp table.
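A minimal HiveQL sketch of that workaround, using the tests3prd and testint tables from the question (the temp table name is just illustrative):

-- 1. Stage the merged result (existing S3 data plus the new rows) in a temp table.
create table tests3prd_tmp as
select c1, c2 from (
    select c1, c2 from tests3prd
    union all
    select c1, c2 from testint
) merged;

-- 2. Rewrite the external table's S3 location with the merged data.
insert overwrite table tests3prd select c1, c2 from tests3prd_tmp;

-- 3. Clean up.
drop table tests3prd_tmp;

Note that this rewrites the whole table on every load, so it becomes more expensive as the S3 data grows.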

Expert Contributor

@Rajesh Balamohan We are also facing a similar issue. Please let us know if there are any fixes available, or any plan to fix this in future HDP releases.

HDP 2.5.3 (EC2 Instances), Hive 1.2.1

Thanks..

Super Guru

It should be fixed in the current release of HDP 2.6. If not, please put in an official support request or JIRA ticket.

Explorer

Hello guys, I have a similar issue, but with an external table on HDFS. Is there any solution for this so far? We are using HDP-2.6.3.0, and here is what my table looks like:

create external table test1(c1 int, c2 int) CLUSTERED BY(c1) SORTED BY(c1) INTO 4 BUCKETS ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/user/companies/';

Currently, the data is always overwritten.

Thank you

Explorer

Hello guys, any update on this? As a reminder, in my case it occurs on HDFS, not S3.

Thank you in advance

Explorer

In case someone faces the same problem: we solved this by making the table internal, keeping the TextFile format and storing the data under the default Hive warehouse directory. The table definition looks like this at the moment:

create table test1(c1 int, c2 int) CLUSTERED BY(c1) SORTED BY(c1) INTO 4 BUCKETS ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
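And a quick way to confirm the append behavior after the change (a hedged check; /apps/hive/warehouse is the usual default warehouse path on HDP and may differ on your cluster):

insert into test1 values (1, 2);
insert into test1 values (3, 4);
dfs -ls /apps/hive/warehouse/test1/;

Each insert should now add new files under the table directory instead of overwriting the existing ones.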