Support Questions

Find answers, ask questions, and share your expertise

Cannot Disable Tez with Hive on HDP3.0

Contributor

Hello,

One normally disables Tez with Hive using:

SET hive.execution.engine=mr;

But when I use this option in the Hive shell I get:

0: jdbc:hive2://my_server:2181,> SET hive.execution.engine = mr;
Error: Error while processing statement: hive execution engine mr is not supported. (state=42000,code=1)

What's going on? Tez is not working for me and I want to try with MR.

I'm using HDP 3.0
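(For anyone landing here while debugging: running `SET` on a property with no value prints its current setting, so you can confirm which engine is actually active before attempting to change it. A minimal check from the same beeline session might look like:)

```sql
-- Print the currently active execution engine; on HDP 3.x this
-- should report "tez", since MapReduce support was removed.
SET hive.execution.engine;
```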

1 ACCEPTED SOLUTION

@Daniel Zafar

Apache Tez replaces MapReduce as the default Hive execution engine in HDP 3.0. MapReduce is no longer supported.

You may want to check what the actual issue with Tez is and fix it.

Ref: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/hive-overview/content/hive-apache-hive-3-ar...


4 REPLIES


@Daniel Zafar Did this help in addressing your query? If yes, please consider accepting the answer and marking this thread as solved.

Explorer

I faced the same issue. I increased the Tez memory to 256, and the job launched after that, but it got stuck at the next step. I'm not sure whether the process is running or not; I could not find any active query in the HiveServer2 UI.

 

0: jdbc:hive2://rudu-cldmst001.rush.edu:2181,> select count(*) from ctakes_annotations_docs_2008;
INFO : Compiling command(queryId=hive_20220531043803_772f756b-c226-40a9-b0f4-186d05ce44b5): select count(*) from ctakes_annotations_docs_2008
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20220531043803_772f756b-c226-40a9-b0f4-186d05ce44b5); Time taken: 6.933 seconds
INFO : Executing command(queryId=hive_20220531043803_772f756b-c226-40a9-b0f4-186d05ce44b5): select count(*) from ctakes_annotations_docs_2008
WARN : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
INFO : Query ID = hive_20220531043803_772f756b-c226-40a9-b0f4-186d05ce44b5
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:204
INFO : Submitting tokens for job: job_1638892027289_0036
INFO : Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: 10.21.24.36:8020, Ident: (token for hive: HDFS_DELEGATION_TOKEN owner=hive, renewer=yarn, realUser=, issueDate=1653989890904, maxDate=1654594690904, sequenceNumber=2652, masterKeyId=1077), Kind: kms-dt, Service: , Ident: (kms-dt owner=hive, renewer=yarn, realUser=, issueDate=1653989890936, maxDate=1654594690936, sequenceNumber=3843, masterKeyId=2078)]
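(For reference, the memory increase described above is usually applied through the Tez container settings. A sketch of the relevant session-level properties follows; the values are illustrative only, not recommendations, and should be sized to your cluster:)

```sql
-- Illustrative values only; size these to your cluster's YARN containers.
SET hive.tez.container.size=2048;    -- memory per Tez task container, in MB
SET tez.am.resource.memory.mb=2048;  -- memory for the Tez ApplicationMaster, in MB
```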

Community Manager

@dpugazhe as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.



Regards,

Vidya Sargur,
Community Manager


Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.