
hive overwrite table

Master Collaborator

Hi, I am receiving this error with this script:

CREATE EXTERNAL TABLE IF NOT EXISTS  mi_cliente_fmes(
id_interno_pe bigint,
cod_nrbe_en int,
mi_nom_cliente string,
fec_ncto_const_pe string,
fecha_prim_rl_cl string ,
sexo_in string,
cod_est_civil_indv string,
cod_est_lab_indv string,
num_hijos_in int,
ind_autnmo_in string,
cod_ofcna_corr string,
cod_cpcdad_lgl_in int
)
CLUSTERED BY (cod_nrbe_en) INTO 60 BUCKETS
stored as ORC
LOCATION '/RSI/tables/desercion/mi_cliente_fmes';


set hive.enforce.bucketing = true;
set map.reduce.tasks = 25;
SET hive.exec.parallel=true;
SET hive.vectorized.execution.enabled=true;
INSERT OVERWRITE TABLE mi_cliente_fmes
select id_interno_pe,
       cod_nrbe_en,
       mi_nom_cliente,
       fec_ncto_const_pe,
       fecha_prim_rl_cl,
       sexo_in,
       cod_est_civil_indv,
       cod_est_lab_indv,
       num_hijos_in,
       ind_autnmo_in,
       cod_ofcna_corr,
       cod_cpcdad_lgl_in
 FROM mi_cliente_fmes_temp;

and the error:

Error: Error while compiling statement: FAILED: SemanticException [Error 10295]: INSERT OVERWRITE not allowed on table with OutputFormat that implements AcidOutputFormat while transaction manager that supports ACID is in use (state=42000,code=10295)
1 ACCEPTED SOLUTION

Super Guru

I believe this is because you have designated your Hive table as EXTERNAL. External tables are most often used to manage data that already lives on HDFS, e.g. files loaded as CSV. In your case, you are creating a table stored as ORC and populating it via a SELECT clause.

Is there a reason you need it to be external? If not, you can omit the "EXTERNAL" keyword from your "CREATE TABLE" clause and remove the "LOCATION" entry. The only real change for you is that the data will be stored under /apps/hive/warehouse on HDFS instead of the location you specified.

This is what you would have:

CREATE TABLE IF NOT EXISTS  mi_cliente_fmes(
id_interno_pe bigint,
cod_nrbe_en int,
mi_nom_cliente string,
fec_ncto_const_pe string,
fecha_prim_rl_cl string ,
sexo_in string,
cod_est_civil_indv string,
cod_est_lab_indv string,
num_hijos_in int,
ind_autnmo_in string,
cod_ofcna_corr string,
cod_cpcdad_lgl_in int
)
CLUSTERED BY (cod_nrbe_en) INTO 60 BUCKETS
stored as ORC
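
If you want to confirm how the table ended up, DESCRIBE FORMATTED reports the table type and storage location (table name taken from the post above):

DESCRIBE FORMATTED mi_cliente_fmes;
-- "Table Type:" should now read MANAGED_TABLE and "Location:" should point under /apps/hive/warehouse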


7 REPLIES

Super Guru

You are using ORC format, which supports ACID and transactions. Try using "INSERT INTO" instead of "INSERT OVERWRITE" and adding a "TRUNCATE TABLE" before running the select.

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML

As of Hive 0.14, if a table has an OutputFormat that implements AcidOutputFormat and the system is configured to use a transaction manager that implements ACID, then INSERT OVERWRITE will be disabled for that table.  This is to avoid users unintentionally overwriting transaction history.  The same functionality can be achieved by using TRUNCATE TABLE (for non-partitioned tables) or DROP PARTITION followed by INSERT INTO.
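
In practice the workaround would look roughly like this (a sketch using the table names from the question, assuming mi_cliente_fmes is a non-partitioned managed table):

-- clear the existing data first (TRUNCATE works only on managed, non-partitioned tables)
TRUNCATE TABLE mi_cliente_fmes;
-- then append instead of overwriting
INSERT INTO TABLE mi_cliente_fmes
SELECT id_interno_pe, cod_nrbe_en, mi_nom_cliente, fec_ncto_const_pe, fecha_prim_rl_cl,
       sexo_in, cod_est_civil_indv, cod_est_lab_indv, num_hijos_in, ind_autnmo_in,
       cod_ofcna_corr, cod_cpcdad_lgl_in
FROM mi_cliente_fmes_temp;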

New Contributor

@myoung can you please give the syntax for writing this sort of query?

Master Collaborator

Hi:

Now I got this error when trying to truncate the table:

Error: Error while compiling statement: FAILED: SemanticException [Error 10146]: Cannot truncate non-managed table mi_cliente_fmes. (state=42000,code=10146)

Contributor

This error happens when you're trying to truncate an external table. Truncate needs to target a native/managed table or an exception will be thrown.

Here's a really great reference:

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
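
A tiny illustration of that rule (hypothetical table names, not from the thread):

-- Managed table: TRUNCATE is allowed
CREATE TABLE t_managed (id int);
TRUNCATE TABLE t_managed;

-- External table: TRUNCATE is rejected
CREATE EXTERNAL TABLE t_external (id int) LOCATION '/tmp/t_external';
TRUNCATE TABLE t_external;   -- FAILED: SemanticException [Error 10146]: Cannot truncate non-managed table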

Master Collaborator

Hi:

after creating the table as managed (without EXTERNAL) and executing:

set hive.enforce.bucketing = true;
set map.reduce.tasks = 60;
SET hive.exec.parallel=true;
SET hive.vectorized.execution.enabled=true;
TRUNCATE TABLE desercion_clientes_temp3;
INSERT INTO TABLE desercion_clientes_temp3
select * FROM desercion_clientes_temp2;

it works!!

Many thanks

Super Guru

Good to hear!