
hive insert into table from query

Contributor

I am doing a multi-step loading process to get data into a transactional table. I have the two table definitions shown at the bottom of this question. When I execute "INSERT INTO TABLE stgicplogs PARTITION (actdate='2016-03-28') SELECT * FROM stgicplogs_stg;" it appears to go well and there are no errors. However, there is also no data in stgicplogs. The data does exist in stgicplogs_stg and comes back fine when selected.

Any idea what I might be missing?

I do have hive.exec.dynamic.partition.mode set to nonstrict and hive.enforce.bucketing set to true.
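For completeness, the session settings generally required for ACID (transactional) tables look roughly like this; the two lines at the bottom are my current settings, while the rest are the commonly documented ACID prerequisites, not copied verbatim from my session:

```sql
-- Settings commonly required for Hive ACID (transactional) tables:
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;
-- Settings I already have in place:
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.enforce.bucketing=true;
```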

The result after running the insert statement in beeline is:

INFO  : Session is already open
INFO  : Dag name: INSERT INTO TABLE stgicplog...stgicplogs_stg(Stage-1)
INFO  : 


INFO  : Status: Running (Executing on YARN cluster with App id application_1478118527771_0011)


INFO  : Map 1: 0/1	Reducer 2: 0/20	
INFO  : Map 1: 0/1	Reducer 2: 0/20	
INFO  : Map 1: 0(+1)/1	Reducer 2: 0/20	
INFO  : Map 1: 0(+1)/1	Reducer 2: 0/20	
INFO  : Map 1: 1/1	Reducer 2: 0/20	
INFO  : Map 1: 1/1	Reducer 2: 0(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 1(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 1(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 2(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 2(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 3(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 4(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 4(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 5(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 6(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 7(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 7(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 8(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 8(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 9(+0)/20	
INFO  : Map 1: 1/1	Reducer 2: 9(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 9(+2)/20	
INFO  : Map 1: 1/1	Reducer 2: 10(+2)/20	
INFO  : Map 1: 1/1	Reducer 2: 11(+2)/20	
INFO  : Map 1: 1/1	Reducer 2: 12(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 12(+2)/20	
INFO  : Map 1: 1/1	Reducer 2: 12(+3)/20	
INFO  : Map 1: 1/1	Reducer 2: 13(+3)/20	
INFO  : Map 1: 1/1	Reducer 2: 14(+3)/20	
INFO  : Map 1: 1/1	Reducer 2: 14(+5)/20	
INFO  : Map 1: 1/1	Reducer 2: 15(+4)/20	
INFO  : Map 1: 1/1	Reducer 2: 15(+5)/20	
INFO  : Map 1: 1/1	Reducer 2: 16(+4)/20	
INFO  : Map 1: 1/1	Reducer 2: 17(+3)/20	
INFO  : Map 1: 1/1	Reducer 2: 18(+2)/20	
INFO  : Map 1: 1/1	Reducer 2: 19(+1)/20	
INFO  : Map 1: 1/1	Reducer 2: 20/20	
INFO  : Loading data to table default.stgicplogs partition (actdate=2016-03-28) from hdfs://localhost:8020/apps/hive/warehouse/stgicplogs/actdate=2016-03-28/.hive-staging_hive_2016-11-03_13-54-35_964_9082514218818653049-1/-ext-10000
INFO  : Partition default.stgicplogs{actdate=2016-03-28} stats: [numFiles=5, numRows=0, totalSize=8626, rawDataSize=0]
No rows affected (31.42 seconds)


create table stgicplogs (
  actdatetime timestamp,
  server VARCHAR(10),
  pid VARCHAR(25),
  level VARCHAR(50),
  type VARCHAR(50),
  details VARCHAR(8000)
)
PARTITIONED BY (actdate DATE)
CLUSTERED BY (server) INTO 20 BUCKETS
STORED AS orc
TBLPROPERTIES ("transactional"="true");

create table stgicplogs_stg (
  actdatetime timestamp,
  server VARCHAR(10),
  pid VARCHAR(25),
  level VARCHAR(50),
  type VARCHAR(50),
  details VARCHAR(8000),
  actdate DATE
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = "|",
  "quoteChar"     = "'",
  "escapeChar"    = "\\"
);
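One thing worth double-checking (an observation, not a confirmed fix): with a static partition spec, the SELECT list should supply only the non-partition columns, but stgicplogs_stg has a trailing actdate column that SELECT * will include. A hypothetical rewrite with an explicit column list avoids that mismatch:

```sql
-- Sketch: list the six data columns explicitly so the staging table's
-- trailing actdate column is not fed into the target's data columns.
INSERT INTO TABLE stgicplogs PARTITION (actdate='2016-03-28')
SELECT actdatetime, server, pid, level, type, details
FROM stgicplogs_stg
WHERE actdate='2016-03-28';
```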
1 ACCEPTED SOLUTION

Contributor

Okay, I found a workaround. Since I don't really need this table to be transactional (it was just a nice-to-have), I created the table without buckets and without the TBLPROPERTIES clause, and now it works as expected.

create table stgicplogs (
  actdatetime timestamp,
  server VARCHAR(10),
  pid VARCHAR(25),
  level VARCHAR(50),
  type VARCHAR(50),
  details VARCHAR(8000)
)
PARTITIONED BY (actdate DATE)
STORED AS orc;
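After recreating the table and rerunning the insert, a quick sanity check confirms rows actually landed in the partition (a sketch; the expected count depends on your data):

```sql
-- Verify the load into the target partition.
SELECT COUNT(*) FROM stgicplogs WHERE actdate='2016-03-28';
```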

