Unable to create hive context with spark2 jdbc channel

Re: Unable to create hive context with spark2 jdbc channel

@Naresh Meena

Can you share more details on the issue?

Re: Unable to create hive context with spark2 jdbc channel

Spark 2.1.1 with Hive (2.6.1) fails while inserting data through the Spark JDBC channel:

2017-11-27 19:38:25 Driver [ERROR] SparkBatchSubmitter - Failed to start the driver for Batch_JDBC_PipelineTest
org.apache.spark.sql.AnalysisException: Hive support is required to insert into the following tables: default.naresh ;;
'InsertIntoTable 'SimpleCatalogRelation default, CatalogTable(
    Table: default.naresh
    Created: Mon Nov 27 19:38:25 IST 2017
    Last Access: Thu Jan 01 05:29:59 IST 1970
    Type: MANAGED
    Schema: [StructField(empID,LongType,true), StructField(empDate,DateType,true), StructField(empName,StringType,true), StructField(empSalary,DoubleType,true), StructField(empLocation,StringType,true), StructField(empConditions,BooleanType,true), StructField(empCity,StringType,true), StructField(empSystemIP,StringType,true)]
    Provider: hive
    Storage(Location: file:/hadoop/yarn/local/usercache/sax/appcache/application_1511627000183_0086/container_e34_1511627000183_0086_01_000001/spark-warehouse/naresh, InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)),
OverwriteOptions(false,Map()), false
+- LogicalRDD [empID#49L, empDate#50, empName#51, empSalary#52, empLocation#53, empConditions#54, empCity#55, empSystemIP#56]

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:39)
at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:57)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:405)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:76)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:128)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:57)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52)
at org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:73)
at org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:263)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:243)
at com.streamanalytix.spark.processor.HiveEmitter.persistRDDToHive(HiveEmitter.java:690)
at com.streamanalytix.spark.processor.HiveEmitter.executeWithRDD(HiveEmitter.java:395)
at com.streamanalytix.spark.core.AbstractProcessor.processRDDMap(AbstractProcessor.java:227)
at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.definePipelineFlow(SparkBatchSubmitter.java:353)
at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.getContext(SparkBatchSubmitter.java:302)
at com.streamanalytix.spark.core.pipeline.SparkBatchSubmitter.submit(SparkBatchSubmitter.java:93)
at com.streamanalytix.deploy.SaxDriver.main(SaxDriver.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
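
For reference, this AnalysisException is raised when the SparkSession running the query was built without Hive support, so the table resolves through Spark's in-memory catalog (the 'SimpleCatalogRelation in the plan above) instead of the Hive metastore. Below is a minimal Java sketch of a session with Hive support enabled; the source query is hypothetical and only illustrates an insert into the default.naresh table from the log, and it assumes hive-site.xml is available on the driver classpath:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HiveInsertSketch {
    public static void main(String[] args) {
        // enableHiveSupport() makes Spark 2.x use the Hive metastore
        // catalog; without it, insertInto() on a Hive table fails with
        // the AnalysisException shown above.
        SparkSession spark = SparkSession.builder()
                .appName("HiveInsertSketch")
                .enableHiveSupport()
                .getOrCreate();

        // Hypothetical source: any Dataset whose schema matches the
        // target table works here.
        Dataset<Row> df = spark.sql("SELECT * FROM default.some_source");

        // With Hive support enabled, default.naresh is resolved through
        // the Hive metastore rather than a SimpleCatalogRelation.
        df.write().insertInto("default.naresh");

        spark.stop();
    }
}

If the session is created by a third-party driver (as with the StreamAnalytix SparkBatchSubmitter here) and you cannot change the builder code, passing --conf spark.sql.catalogImplementation=hive to spark-submit has the same effect, provided the Spark build includes the Hive classes.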