
Update and Delete are not working in Hive?

Re: Update and Delete are not working in Hive?

Champion

@syamsri 

OK. Could you please let me know the file format you are using for the Hive table (testTableNew)?

Hive supports DELETE and UPDATE only on the ORC file format, starting from version 0.14.

 

Try creating a table with the ORC format. If you want more flexibility, try Apache Kudu, but it has its own merits and demerits. Hope this helps.

 

 

CREATE TABLE Sample (
  id                int,
  name              string
)
CLUSTERED BY (id) INTO 2 BUCKETS STORED AS ORC
TBLPROPERTIES ("transactional"="true",
  "compactor.mapreduce.map.memory.mb"="2048",    
  "compactorthreshold.hive.compactor.delta.num.threshold"="4",  
  "compactorthreshold.hive.compactor.delta.pct.threshold"="0.5"
);
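For illustration (these statements are not from the original post), once such a transactional table exists and the transaction properties are in effect, UPDATE and DELETE statements of this shape should be accepted against it:

UPDATE Sample SET name = 'renamed' WHERE id = 1;
DELETE FROM Sample WHERE id = 2;

Without the transactional table properties and ACID configuration, Hive rejects both statements.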

Re: Update and Delete are not working in Hive?

Contributor

Thanks for the reply.

 

I did create the table in ORC format; you can see the details in my first post.

 

Is Apache Kudu like Hive?

 

Thanks,

Syam.

Re: Update and Delete are not working in Hive?

Champion

@syamsri

 

Apache Kudu is not like Hive; it is more like HDFS, as a storage layer. The difference is that HDFS stores data row-wise, whereas Kudu stores it column-wise.

Re: Update and Delete are not working in Hive?

Contributor

Kudu is more like HBase.

Re: Update and Delete are not working in Hive?

Champion

@syamsri Since you are using Cloudera Manager: did you use the safety valve to add the properties that need to go into HiveServer2, or did you manually edit hive-site.xml? It looks like your default session configuration is being used, and it is not picking up the transaction properties.
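For reference, this is a minimal sketch of the transaction-related properties that typically need to be in effect for Hive ACID DML (property names as in the Hive Transactions documentation; the compactor values here are illustrative):

<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.enforce.bucketing</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>

The first four are client/HiveServer2-side; the compactor properties belong on the metastore side.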

Re: Update and Delete are not working in Hive?

Contributor

Please check my hive-site.xml file below and guide me.

 

<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://quickstart.cloudera:9083</value>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>300</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.warehouse.subdir.inherit.perms</name>
<value>true</value>
</property>
<property>
<name>hive.auto.convert.join</name>
<value>true</value>
</property>
<property>
<name>hive.auto.convert.join.noconditionaltask.size</name>
<value>20971520</value>
</property>
<property>
<name>hive.optimize.bucketmapjoin.sortedmerge</name>
<value>false</value>
</property>
<property>
<name>hive.smbjoin.cache.rows</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.logging.operation.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/var/log/hive/operation_logs</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>-1</value>
</property>
<property>
<name>hive.exec.reducers.bytes.per.reducer</name>
<value>67108864</value>
</property>
<property>
<name>hive.exec.copyfile.maxsize</name>
<value>33554432</value>
</property>
<property>
<name>hive.exec.reducers.max</name>
<value>1099</value>
</property>
<property>
<name>hive.vectorized.groupby.checkinterval</name>
<value>4096</value>
</property>
<property>
<name>hive.vectorized.groupby.flush.percent</name>
<value>0.1</value>
</property>
<property>
<name>hive.compute.query.using.stats</name>
<value>false</value>
</property>
<property>
<name>hive.vectorized.execution.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.vectorized.execution.reduce.enabled</name>
<value>false</value>
</property>
<property>
<name>hive.merge.mapfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.mapredfiles</name>
<value>false</value>
</property>
<property>
<name>hive.cbo.enable</name>
<value>false</value>
</property>
<property>
<name>hive.fetch.task.conversion</name>
<value>minimal</value>
</property>
<property>
<name>hive.fetch.task.conversion.threshold</name>
<value>268435456</value>
</property>
<property>
<name>hive.limit.pushdown.memory.usage</name>
<value>0.1</value>
</property>
<property>
<name>hive.merge.sparkfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.smallfiles.avgsize</name>
<value>16777216</value>
</property>
<property>
<name>hive.merge.size.per.task</name>
<value>268435456</value>
</property>

<property>
<name>hive.optimize.reducededuplication</name>
<value>true</value>
</property>
<property>
<name>hive.optimize.reducededuplication.min.reducer</name>
<value>4</value>
</property>
<property>
<name>hive.map.aggr</name>
<value>true</value>
</property>
<property>
<name>hive.map.aggr.hash.percentmemory</name>
<value>0.5</value>
</property>
<property>
<name>hive.optimize.sort.dynamic.partition</name>
<value>false</value>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
<property>
<name>spark.executor.memory</name>
<value>52428800</value>
</property>
<property>
<name>spark.driver.memory</name>
<value>52428800</value>
</property>
<property>
<name>spark.executor.cores</name>
<value>1</value>
</property>
<property>
<name>spark.yarn.driver.memoryOverhead</name>
<value>64</value>
</property>
<property>
<name>spark.yarn.executor.memoryOverhead</name>
<value>64</value>
</property>
<property>
<name>spark.dynamicAllocation.enabled</name>
<value>true</value>
</property>
<property>
<name>spark.dynamicAllocation.initialExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.minExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.maxExecutors</name>
<value>2147483647</value>
</property>
<property>
<name>hive.metastore.execute.setugi</name>
<value>true</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>quickstart.cloudera</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>quickstart.cloudera</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hive.zookeeper.namespace</name>
<value>hive_zookeeper_namespace_hive</value>
</property>
<property>
<name>hive.cluster.delegation.token.store.class</name>
<value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
<property>
<name>hive.server2.use.SSL</name>
<value>false</value>
</property>
<property>
<name>spark.shuffle.service.enabled</name>
<value>true</value>
</property>
</configuration>

 

Thanks,

Syam.
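As a side note (not part of the original reply): when hive-site.xml edits are not being picked up, the client-side transaction properties can usually also be set per session in the Hive shell or Beeline before running DML, for example:

SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

The compactor properties cannot be set this way; they must be configured on the metastore service.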

Re: Update and Delete are not working in Hive?

New Contributor
Did you get any solution to the above issue? I got a similar issue while doing update/delete.

Re: Update and Delete are not working in Hive?

Champion

@Suribharu

What file format are you using in Hive?

What version of Hive are you on?

Could you share the delete query with me?

Re: Update and Delete are not working in Hive?

Contributor

Sorry for the late response. No, I could not find the exact solution for those errors. I did follow all the steps mentioned in this post, but that did not work. As a result, I uninstalled Hive and re-installed a different Hive version, which works for me. I spent many days trying to find the exact solution to this issue but could not find it.

Re: Update and Delete are not working in Hive?

Explorer

@UjjwalRana

 

What version of Hive did you install? Did you manage to solve the issue?

 

Thanks!