
Hive on Spark has failed


Hi Team,

I am trying to run Hive queries on the Spark engine, but every query fails with the error below. Can someone help me fix this?

 

set hive.execution.engine=spark;

 

select count(*) from ips_project_event;
INFO : Compiling command(queryId=hive_20200210061856_86055f2f-20b9-4fdd-9324-907924fa2df5): select count(*) from ips_project_event
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20200210061856_86055f2f-20b9-4fdd-9324-907924fa2df5); Time taken: 1.499 seconds
INFO : Executing command(queryId=hive_20200210061856_86055f2f-20b9-4fdd-9324-907924fa2df5): select count(*) from ips_project_event
INFO : Query ID = hive_20200210061856_86055f2f-20b9-4fdd-9324-907924fa2df5
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
ERROR : FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session d944d094-547b-44a5-a1bf-77b9a3952fe2
INFO : Completed executing command(queryId=hive_20200210061856_86055f2f-20b9-4fdd-9324-907924fa2df5); Time taken: 1.594 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session d944d094-547b-44a5-a1bf-77b9a3952fe2 (state=42000,code=30041)

1 REPLY

Re: Hive on Spark has failed

Mentor

@saivenkatg55 

 

That could be a memory issue on your cluster. Can you share the values of the following configs?

set spark.executor.memory;
set yarn.nodemanager.resource.memory-mb;
set yarn.scheduler.maximum-allocation-mb;
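
These three settings have to be consistent: the executor container (spark.executor.memory plus its off-heap overhead) must fit within yarn.scheduler.maximum-allocation-mb, which in turn cannot exceed yarn.nodemanager.resource.memory-mb. If the executor request is larger than the YARN maximum, Spark never gets its containers and session creation fails exactly like this. As a rough sketch, assuming YARN allows 8 GB containers (placeholder values, not recommendations for your cluster):

-- Hypothetical values: size these from what the set commands above report.
set spark.executor.memory=6g;
set spark.yarn.executor.memoryOverhead=1024;  -- ~1 GB off-heap headroom, in MB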

Here is a link that should help: How to calculate node and executors memory in Apache Spark
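
To give a feel for the arithmetic in that link, here is an illustrative sizing for a hypothetical worker node with 16 cores and 64 GB of RAM (assumed numbers, not taken from your cluster):

-- Reserve ~1 core / 1 GB for the OS and Hadoop daemons -> 15 cores, 63 GB usable
-- At 5 cores per executor -> 3 executors per node
-- 63 GB / 3 executors = 21 GB per container; keep ~2 GB of that as off-heap overhead
set spark.executor.cores=5;
set spark.executor.memory=19g;
set spark.yarn.executor.memoryOverhead=2048;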

After adjusting those values, please rerun the query and share the new output.
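
If the memory settings already look sane, check the HiveServer2 log and the YARN application log for the failed session, since return code 30041 only says the Spark client could not be created. One common cause is the session timing out while YARN allocates containers; assuming that is what is happening, raising the Hive-on-Spark client timeouts is a quick test (the property names are standard Hive settings, the values are illustrative):

set hive.spark.client.connect.timeout=30000ms;
set hive.spark.client.server.connect.timeout=300000ms;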

 
