<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Failing to save dataframe to in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50912#M54417</link>
    <description>I'll give credit where it is due: I found this over on SO. It's handy, and I could have used it in the past.&lt;BR /&gt;&lt;BR /&gt;SPARK_PRINT_LAUNCH_COMMAND=true spark-shell&lt;BR /&gt;&lt;BR /&gt;SPARK_PRINT_LAUNCH_COMMAND=true spark-submit ...&lt;BR /&gt;&lt;BR /&gt;This prints the full launch command to stdout, including the classpath. Search the classpath for hive-exec*.jar; that jar contains the method for loading dynamic partitions.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://stackoverflow.com/questions/30512598/spark-is-there-a-way-to-print-out-classpath-of-both-spark-shell-and-spark" target="_blank"&gt;http://stackoverflow.com/questions/30512598/spark-is-there-a-way-to-print-out-classpath-of-both-spark-shell-and-spark&lt;/A&gt;</description>
    <pubDate>Tue, 14 Feb 2017 22:44:39 GMT</pubDate>
    <dc:creator>mbigelow</dc:creator>
    <dc:date>2017-02-14T22:44:39Z</dc:date>
    <item>
      <title>Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50909#M54414</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm trying to write a DataFrame to a Hive partitioned table. This works fine from spark-shell, but when I use spark-submit I get the following exception:&lt;/P&gt;&lt;PRE&gt;Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
        at java.lang.Class.getMethod(Class.java:1665)
        at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:114)
        at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod$lzycompute(HiveShim.scala:404)
        at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod(HiveShim.scala:403)
        at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitions(HiveShim.scala:455)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(ClientWrapper.scala:562)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:281)
        at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:228)
        at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:227)
        at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:270)
        at org.apache.spark.sql.hive.client.ClientWrapper.loadDynamicPartitions(ClientWrapper.scala:561)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:225)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:276)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
        at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:239)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221)
        at com.pelephone.TrueCallLoader$.main(TrueCallLoader.scala:175)
        at com.pelephone.TrueCallLoader.main(TrueCallLoader.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)&lt;/PRE&gt;&lt;P&gt;Can you help me find the problem?&lt;/P&gt;&lt;P&gt;Nimrod&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 11:05:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50909#M54414</guid>
      <dc:creator>nimrodor</dc:creator>
      <dc:date>2022-09-16T11:05:27Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50910#M54415</link>
      <description>On the surface it looks like a classpath issue, which would explain the difference between the shell and running on the cluster.&lt;BR /&gt;&lt;BR /&gt;In which mode did you launch the job?&lt;BR /&gt;&lt;BR /&gt;Are you using the SQLContext or the HiveContext?&lt;BR /&gt;&lt;BR /&gt;If you are using a HiveContext, did you set these settings?&lt;BR /&gt;&lt;BR /&gt;SET hive.exec.dynamic.partition=true;&lt;BR /&gt;SET hive.exec.max.dynamic.partitions=2048;&lt;BR /&gt;SET hive.exec.dynamic.partition.mode=nonstrict;</description>
      <pubDate>Tue, 14 Feb 2017 22:29:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50910#M54415</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-02-14T22:29:39Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50911#M54416</link>
      <description>&lt;P&gt;It's yarn-client mode, and I'm using a HiveContext with all of those parameters set.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Nimrod&lt;/P&gt;</description>
      <pubDate>Tue, 14 Feb 2017 22:33:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50911#M54416</guid>
      <dc:creator>nimrodor</dc:creator>
      <dc:date>2017-02-14T22:33:40Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50912#M54417</link>
      <description>I'll give credit where it is due: I found this over on SO. It's handy, and I could have used it in the past.&lt;BR /&gt;&lt;BR /&gt;SPARK_PRINT_LAUNCH_COMMAND=true spark-shell&lt;BR /&gt;&lt;BR /&gt;SPARK_PRINT_LAUNCH_COMMAND=true spark-submit ...&lt;BR /&gt;&lt;BR /&gt;This prints the full launch command to stdout, including the classpath. Search the classpath for hive-exec*.jar; that jar contains the method for loading dynamic partitions.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://stackoverflow.com/questions/30512598/spark-is-there-a-way-to-print-out-classpath-of-both-spark-shell-and-spark" target="_blank"&gt;http://stackoverflow.com/questions/30512598/spark-is-there-a-way-to-print-out-classpath-of-both-spark-shell-and-spark&lt;/A&gt;</description>
      <pubDate>Tue, 14 Feb 2017 22:44:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50912#M54417</guid>
      <dc:creator>mbigelow</dc:creator>
      <dc:date>2017-02-14T22:44:39Z</dc:date>
    </item>
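    <!-- The reply above suggests searching the printed classpath for hive-exec*.jar. A minimal Python sketch of that search, assuming the classpath was captured as a colon-separated string (the classpath value below is a hypothetical example, not taken from this thread):

```python
import fnmatch

def find_jar(classpath, pattern="hive-exec*.jar"):
    """Return the classpath entries whose file name matches the glob pattern."""
    matches = []
    for entry in classpath.split(":"):
        # Compare only the file name, not the directory part of the entry.
        name = entry.rsplit("/", 1)[-1]
        if fnmatch.fnmatch(name, pattern):
            matches.append(entry)
    return matches

# Hypothetical classpath, as printed when SPARK_PRINT_LAUNCH_COMMAND=true is set.
cp = ("/opt/cloudera/parcels/CDH/jars/hive-exec-1.1.0-cdh5.8.2.jar:"
      "/opt/cloudera/parcels/CDH/jars/spark-assembly.jar")
print(find_jar(cp))
```

If the jar appears in the spark-shell classpath but not in the spark-submit one, that difference would explain the NoSuchMethodException. -->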
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50937#M54418</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I did what you suggested but it seems that both are using the same jar:&lt;/P&gt;&lt;P&gt;/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/jars/hive-exec-1.1.0-cdh5.8.2.jar&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I could not find any difference in the classpath at all.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Nimrod&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Feb 2017 09:51:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/50937#M54418</guid>
      <dc:creator>nimrodor</dc:creator>
      <dc:date>2017-02-15T09:51:54Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/51010#M54419</link>
      <description>I replaced the saveAsTable call with hiveContext.sql and it worked.&lt;BR /&gt;&lt;BR /&gt;Thanks!</description>
      <pubDate>Thu, 16 Feb 2017 13:27:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/51010#M54419</guid>
      <dc:creator>nimrodor</dc:creator>
      <dc:date>2017-02-16T13:27:35Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/58247#M54420</link>
      <description>I don't think that's a fix for the issue.</description>
      <pubDate>Mon, 31 Jul 2017 13:20:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/58247#M54420</guid>
      <dc:creator>DeshSingh</dc:creator>
      <dc:date>2017-07-31T13:20:46Z</dc:date>
    </item>
    <item>
      <title>Re: Failing to save dataframe to</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/59497#M54421</link>
      <description>&lt;P&gt;I am having the same problem.&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/18127"&gt;@mbigelow&lt;/a&gt; Could you kindly provide some guidance on how to initialize a HiveContext properly in an IDE like IntelliJ or Eclipse?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Sep 2017 10:30:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Failing-to-save-dataframe-to/m-p/59497#M54421</guid>
      <dc:creator>anirbandd</dc:creator>
      <dc:date>2017-09-01T10:30:54Z</dc:date>
    </item>
  </channel>
</rss>

