
Hive UPDATE, DELETE and INSERT error in CDH 5.4.2

Expert Contributor

Hi,

 

I am getting the following error in CDH 5.4.2:

 

FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

 

I followed the documented steps and limitations.

 

Here are my steps:

 

1. Set the new configuration parameters for transactions

2. Create a Hive table with ACID support

3. Load data into the Hive table

4. Run UPDATE, DELETE and INSERT

 

set hive.support.concurrency=true;
set hive.enforce.bucketing=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=2;

 

 

CREATE TABLE abc1 (
empwork_key int,
empwork_id int,
empwork_name string,
empwork_email string,
emp_wrk_phone string)
CLUSTERED BY (empwork_id) INTO 2 BUCKETS
STORED AS ORC TBLPROPERTIES ('transactional' = 'true');

 

-- the data is inserted from an external table stored as textfile.

 

INSERT INTO TABLE abc1
SELECT
empwork_key,
empwork_id,
empwork_name,
empwork_email,
emp_wrk_phone
FROM test.abc1;

 

update abc1 SET empwork_name = "Raj" where empwork_key = 70;

 

 

Please suggest any configuration changes that may be needed.

 

I am setting all properties from the Hive shell.

 

 

 

17 Replies

Mentor
Are you using the Hive CLI or Beeline+HS2 for this? Have you tried setting the properties in the configuration file instead; does that work?

The property appears to be set correctly, but the check likely fails because the transaction manager instance is read from the default session configuration, not the per-query configuration.
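If the per-session `set` commands are being ignored, one option (a sketch, not verified against this exact CDH release) is to put the transaction properties into hive-site.xml so HS2 picks them up at startup. The property names below match the `set` commands used earlier; the values are examples, not recommendations:

```xml
<!-- Sketch of a hive-site.xml fragment for ACID support.
     Property names match the per-session set commands above. -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>2</value>
</property>
```

A HiveServer2 restart would be required for these to take effect.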

New Contributor

I am using Beeline+HS2

 

I configured the properties below at the shell level, but the script fails with the same error:

 

set hive.auto.convert.join.noconditionaltask.size = 10000000;
set hive.support.concurrency = true;
set hive.enforce.bucketing = true;
set hive.exec.dynamic.partition.mode = nonstrict;
set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads = 1 ;

UPDATE update_test SET style_code="TEST" where style_code="xxxxx";

 

Error: Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations. (state=42000,code=10294)

 

Any help is appreciated.

Mentor
The transaction manager cannot be set on a per-query basis; it can only be set in the HS2 configuration.

Note that we do not recommend use of the transaction manager features currently: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_rn_hive_ki.html
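One quick way to see which transaction manager a session actually ended up with is to echo the property from Beeline using the standard HiveQL `SET` command (if it reports `DummyTxnManager`, the override did not take effect; that default name is my assumption of the typical out-of-the-box value):

```sql
-- Print the effective value of the property for this session.
SET hive.txn.manager;
```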


Hi,

 

I have also set the values below, but I used the hive-site.xml snippet in Cloudera Manager (5.4.8). After restarting the cluster I also checked the hive-site.xml under "/run/cloudera-scm-agent/process/" and found the entries there, so everything seems to be fine.

 

However, it is still not possible to delete table entries in Hive; I still get the error message mentioned above:


FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

 

hive.auto.convert.join.noconditionaltask.size = 10000000;
hive.support.concurrency = true;
hive.enforce.bucketing = true;
hive.exec.dynamic.partition.mode = nonstrict;
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
hive.compactor.initiator.on = true;
hive.compactor.worker.threads = 1 ;

 

Maybe I missed something; could you please help me?

 

Thanks a lot in advance,

Matthias

New Contributor

Does anyone know what the risks are if I set those values?

Explorer

I'm bringing this back from the dead.

 

We're getting an error when trying to delete via Beeline:

 

INFO  : OK
+-------------+--+
| new123.foo  |
+-------------+--+
| foo         |
+-------------+--+
1 row selected (0.106 seconds)
0: jdbc:hive2://svqxbdcn6cdh57sparkn1:10000/d> delete from new123 where foo='foo';
Error: Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations. (state=42000,code=10294)
0: jdbc:hive2://svqxbdcn6cdh57sparkn1:10000/d>

 

 

This is in CDH 5.7 in an unsecured configuration. I haven't added any of the configuration changes yet, but was looking to see if there was a definitive fix out there, or at least something that would help me understand what was happening.

 

Explorer

With the suggested changes above I get:

 

0: jdbc:hive2://svqxbdcn6cdh57sparkn1:10000/d> delete from new123 where foo='foo';
Error: Error while compiling statement: FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table default.new123 that does not use an AcidOutputFormat or is not bucketed (state=42000,code=10297)
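Error 10297 points at the table definition itself rather than the session configuration: for ACID operations the target table must be bucketed and use an ACID-capable format such as ORC. A sketch of a DDL that should satisfy those requirements (the table and column names here are illustrative, carried over from the example above):

```sql
-- Illustrative DDL: ACID DML requires a bucketed, ORC-backed,
-- transactional table.
CREATE TABLE new123_acid (foo string)
CLUSTERED BY (foo) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');
```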

Explorer

I decided to try with a bucketed table and ended up with this error:

 

0: jdbc:hive2://svqxbdcn6cdh57sparkn1:10000/d> delete from floridacities where id='30';
Error: Error while compiling statement: FAILED: SemanticException [Error 10122]: Bucketized tables do not support INSERT INTO: Table: default.floridacities (state=42000,code=10122)

New Contributor

I was also facing the same issue. I followed these steps and it worked for me:

 

set hive.support.concurrency=true;
set hive.enforce.bucketing=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=2;

 

Then I added the hive.in.test property, set to true, in the hive-site.xml file under /usr/lib/hive.

 

After that I restarted Hive from Hue, ran the update command again, and it worked.
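For reference, the hive.in.test change described above would look roughly like this in hive-site.xml. Note that this flag is intended for Hive's internal testing, so treat it as an unsupported workaround rather than a recommended setting:

```xml
<!-- Workaround described above; hive.in.test is a test-only flag. -->
<property>
  <name>hive.in.test</name>
  <value>true</value>
</property>
```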