Support Questions

Find answers, ask questions, and share your expertise

Error while executing hive query on Spark as execution engine

New Contributor

I am using HDP 2.3.2.

I used the following command to set Spark as Hive's execution engine:

set hive.execution.engine=spark;

and executed the query:

select count(*) from tablename;

I got the following error:

java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
        at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<init>(ScalaNumberDeserializersModule.scala:49)
        at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<clinit>(ScalaNumberDeserializersModule.scala)
        at com.fasterxml.jackson.module.scala.deser.ScalaNumberDeserializersModule$class.$init$(ScalaNumberDeserializersModule.scala:61)
        at com.fasterxml.jackson.module.scala.DefaultScalaModule.<init>(DefaultScalaModule.scala:19)
        at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<init>(DefaultScalaModule.scala:35)
        at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<clinit>(DefaultScalaModule.scala)
        at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:78)
        at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
        at org.apache.spark.SparkContext.withScope(SparkContext.scala:681)
        at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:956)
        at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:428)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateMapInput(SparkPlanGenerator.java:188)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateParentTran(SparkPlanGenerator.java:134)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:106)
        at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.execute(LocalHiveSparkClient.java:130)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:64)
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:107)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1655)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1414)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;

Please suggest some fixes. Thanks.

1 ACCEPTED SOLUTION

Master Mentor

@sangeeta rawat

It's not supported yet. You can leverage Spark SQL to access Hive.
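Since Hive on Spark isn't supported on HDP, a minimal sketch of running the same count through Spark SQL instead might look like the following (Scala, in the spark-shell of the Spark 1.x that ships with HDP 2.3.2; `tablename` stands in for your table name):

```scala
// In spark-shell, sqlContext is a HiveContext when Spark is built with Hive
// support, so tables registered in the Hive metastore are directly queryable.
val df = sqlContext.sql("SELECT count(*) FROM tablename")
df.show()

// Equivalent DataFrame API form: load the Hive table and count its rows.
val rowCount = sqlContext.table("tablename").count()
println(rowCount)
```

This runs the aggregation entirely inside Spark, so it avoids the Jackson class conflict between Hive's and Spark's bundled jars.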


9 REPLIES

Master Mentor

@sangeeta rawat can you elaborate on the need to use Spark as Hive's execution engine? Hortonworks does not support this execution model. Where does Tez fall short for you that you think Spark can help?

New Contributor

Thanks @Artem Ervits for the quick response. 🙂

Master Mentor

@sangeeta rawat

It's not supported yet. You can leverage Spark SQL to access Hive.

New Contributor

Thanks @Neeraj Sabharwal for the quick response. 🙂

Master Mentor

@Harshal Joshi

Please accept the best answer to close the thread, and it's good to have you here.

New Contributor

@Artem Ervits Thanks for the prompt reply. It's not that Tez has fallen short; I was just analysing the differences between the various execution engines for Hive.

New Contributor

@Neeraj Sabharwal Thanks for the prompt reply

Explorer

I am facing the same issue. I ran

set hive.execution.engine=spark;

and executed the query:

select count(*) from tablename;

I'm getting the same error, but

select * from tablename;

works fine. So what is the best solution for running

select count(*) from tablename;

in the Hive shell?
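The difference you're seeing is expected: Hive can answer a plain `select *` with a simple fetch task that reads the table directly, without launching any execution-engine job, whereas `count(*)` must launch a job on the configured engine, and that Spark job fails with the Jackson class conflict. Since Hive on Spark isn't supported on HDP, the straightforward fix is to switch back to a supported engine for the session (a sketch; `tablename` stands in for your table):

```sql
-- Switch back to a supported execution engine for this session
set hive.execution.engine=tez;   -- or mr on older setups

-- The aggregate now runs as a Tez job instead of a Spark job
select count(*) from tablename;
```

To make the change permanent, set hive.execution.engine in Ambari under the Hive service configuration instead of per session.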