<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Cannot write Dataframe from Spark to Hive using Hive warehouse connector in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Cannot-write-Dataframe-from-Spark-to-Hive-using-Hive/m-p/238448#M200259</link>
    <description>&lt;P&gt;I'm trying to work with Spark and Hive on HDP 3.0. As I see in some articles, we now have to use the Hive Warehouse Connector. Everything works fine except for one problem:&lt;/P&gt;&lt;P&gt;I can't write a DataFrame directly from Spark to Hive using the Hive Warehouse Connector.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I have the following code:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;    val spark = SparkSession
      .builder()
      .appName("Spark Hive Test Job")
      .config("spark.sql.hive.hiveserver2.jdbc.url", "jdbc:hive2://s01.ndtstu.local:2181,m01.ndtstu.local:2181,m02.ndtstu.local:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;user=hive;password=hive?")
      .getOrCreate()

    val hive = HiveWarehouseSession.session(spark).build()
    hive.createDatabase("testADD", true)
    hive.setDatabase("testADD")
    hive.createTable("tabletest").ifNotExists()
        .column("k", "int")
        .column("value", "int")
        .create()

    import spark.implicits._

    val x = Seq((4,4),(5,5)).toDF("k","value")

    x.write
      .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
      .mode("append")
      .option("database", "testADD")
      .option("table", "tabletest2")
      .save()&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;And I get the following error stack trace:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;19/04/16 13:09:37 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceWriter@7057dbda is aborting.
19/04/16 13:09:37 ERROR HiveWarehouseDataSourceWriter: Aborted DataWriter job 20190416130935-a22961fa-7dee-4eed-a41c-7f67bbf3bdae
19/04/16 13:09:37 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceWriter@7057dbda aborted.
Exception in thread "main" org.apache.spark.SparkException: Writing job aborted.
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:112)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:256)
        at test.Test2$.main(Test2.scala:40)
        at test.Test2.main(Test2.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, s05.ndtstu.local, executor 1): java.lang.AbstractMethodError: com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataWriterFactory.createDataWriter(II)Lorg/apache/spark/sql/sources/v2/writer/DataWriter;
        at org.apache.spark.sql.execution.datasources.v2.InternalRowDataWriterFactory.createDataWriter(WriteToDataSourceV2.scala:191)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2.scala:129)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:79)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:82)
        ... 25 more
Caused by: java.lang.AbstractMethodError: com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataWriterFactory.createDataWriter(II)Lorg/apache/spark/sql/sources/v2/writer/DataWriter;
        at org.apache.spark.sql.execution.datasources.v2.InternalRowDataWriterFactory.createDataWriter(WriteToDataSourceV2.scala:191)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2.scala:129)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:79)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I can't find any information about this error.&lt;/P&gt;</description>
    <pubDate>Tue, 16 Apr 2019 22:54:50 GMT</pubDate>
    <dc:creator>adinaret</dc:creator>
    <dc:date>2019-04-16T22:54:50Z</dc:date>
    <item>
      <title>Cannot write Dataframe from Spark to Hive using Hive warehouse connector</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-write-Dataframe-from-Spark-to-Hive-using-Hive/m-p/238448#M200259</link>
      <description>&lt;P&gt;I'm trying to work with Spark and Hive on HDP 3.0. As I see in some articles, we now have to use the Hive Warehouse Connector. Everything works fine except for one problem:&lt;/P&gt;&lt;P&gt;I can't write a DataFrame directly from Spark to Hive using the Hive Warehouse Connector.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I have the following code:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;    val spark = SparkSession
      .builder()
      .appName("Spark Hive Test Job")
      .config("spark.sql.hive.hiveserver2.jdbc.url", "jdbc:hive2://s01.ndtstu.local:2181,m01.ndtstu.local:2181,m02.ndtstu.local:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;user=hive;password=hive?")
      .getOrCreate()

    val hive = HiveWarehouseSession.session(spark).build()
    hive.createDatabase("testADD", true)
    hive.setDatabase("testADD")
    hive.createTable("tabletest").ifNotExists()
        .column("k", "int")
        .column("value", "int")
        .create()

    import spark.implicits._

    val x = Seq((4,4),(5,5)).toDF("k","value")

    x.write
      .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
      .mode("append")
      .option("database", "testADD")
      .option("table", "tabletest2")
      .save()&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;And I get the following error stack trace:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;PRE&gt;19/04/16 13:09:37 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceWriter@7057dbda is aborting.
19/04/16 13:09:37 ERROR HiveWarehouseDataSourceWriter: Aborted DataWriter job 20190416130935-a22961fa-7dee-4eed-a41c-7f67bbf3bdae
19/04/16 13:09:37 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceWriter@7057dbda aborted.
Exception in thread "main" org.apache.spark.SparkException: Writing job aborted.
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:112)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:256)
        at test.Test2$.main(Test2.scala:40)
        at test.Test2.main(Test2.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, s05.ndtstu.local, executor 1): java.lang.AbstractMethodError: com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataWriterFactory.createDataWriter(II)Lorg/apache/spark/sql/sources/v2/writer/DataWriter;
        at org.apache.spark.sql.execution.datasources.v2.InternalRowDataWriterFactory.createDataWriter(WriteToDataSourceV2.scala:191)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2.scala:129)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:79)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:82)
        ... 25 more
Caused by: java.lang.AbstractMethodError: com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataWriterFactory.createDataWriter(II)Lorg/apache/spark/sql/sources/v2/writer/DataWriter;
        at org.apache.spark.sql.execution.datasources.v2.InternalRowDataWriterFactory.createDataWriter(WriteToDataSourceV2.scala:191)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2.scala:129)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:79)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$2.apply(WriteToDataSourceV2.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I can't find any information about this error.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Apr 2019 22:54:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-write-Dataframe-from-Spark-to-Hive-using-Hive/m-p/238448#M200259</guid>
      <dc:creator>adinaret</dc:creator>
      <dc:date>2019-04-16T22:54:50Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot write Dataframe from Spark to Hive using Hive warehouse connector</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-write-Dataframe-from-Spark-to-Hive-using-Hive/m-p/238449#M200260</link>
      <description>&lt;P&gt;Finally I found the solution. The problem was an incorrect version of the HWC jar.&lt;/P&gt;&lt;P&gt;We have to take into account that the HWC jar must be compatible with the HDP version. In my case, I was using Maven with Eclipse to build the solution.&lt;/P&gt;</description>
      <pubDate>Wed, 17 Apr 2019 16:40:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-write-Dataframe-from-Spark-to-Hive-using-Hive/m-p/238449#M200260</guid>
      <dc:creator>adinaret</dc:creator>
      <dc:date>2019-04-17T16:40:20Z</dc:date>
    </item>
  </channel>
</rss>