<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark Weird Error in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138245#M52055</link>
    <description>&lt;P&gt;
	Pretty low level. Looking into the source, the failing assertion is &lt;/P&gt;
&lt;PRE&gt;assert(expectedAttrs.length == attrs.length)
&lt;/PRE&gt;&lt;P&gt;
	What does that mean? I'm not entirely sure. A Google search turns up: &lt;/P&gt;&lt;P&gt;
	1. Stack Overflow &lt;A href="http://stackoverflow.com/questions/38740862/not-able-to-fetch-result-from-hive-transaction-enabled-table-through-spark-sql"&gt;http://stackoverflow.com/questions/38740862/not-able-to-fetch-result-from-hive-transaction-enabled-table-through-spark-sql&lt;/A&gt;&lt;/P&gt;&lt;P&gt;2. &lt;A href="https://issues.apache.org/jira/browse/SPARK-18355"&gt;SPARK-18355&lt;/A&gt;: Spark SQL fails to read data from an ORC Hive table that has a new column added to it&lt;/P&gt;&lt;P&gt;If #2 is the cause, there's no obvious workaround right now. There are some details in #1 on how the problem might be avoided.&lt;/P&gt;</description>
    <pubDate>Fri, 20 Jan 2017 22:02:12 GMT</pubDate>
    <dc:creator>stevel</dc:creator>
    <dc:date>2017-01-20T22:02:12Z</dc:date>
    <item>
      <title>Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138244#M52054</link>
      <description>&lt;PRE&gt;import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val df = sqlContext.table("tablename")
df.select("location").show(5)
&lt;/PRE&gt;&lt;P&gt;java.lang.AssertionError: assertion failed
	at scala.Predef$.assert(Predef.scala:165)
	at org.apache.spark.sql.execution.datasources.LogicalRelation$anonfun$1.apply(LogicalRelation.scala:39)
	at org.apache.spark.sql.execution.datasources.LogicalRelation$anonfun$1.apply(LogicalRelation.scala:38)
	at scala.Option.map(Option.scala:145)
	at org.apache.spark.sql.execution.datasources.LogicalRelation.&amp;lt;init&amp;gt;(LogicalRelation.scala:38)
	at org.apache.spark.sql.execution.datasources.LogicalRelation.copy(LogicalRelation.scala:31)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$convertToOrcRelation(HiveMetastoreCatalog.scala:588)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$anonfun$apply$2.applyOrElse(HiveMetastoreCatalog.scala:647)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$anonfun$apply$2.applyOrElse(HiveMetastoreCatalog.scala:643)
	at org.apache.spark.sql.catalyst.trees.TreeNode$anonfun$transformUp$1.apply(TreeNode.scala:335)
	at org.apache.spark.sql.catalyst.trees.TreeNode$anonfun$transformUp$1.apply(TreeNode.scala:335)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$.apply(HiveMetastoreCatalog.scala:643)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$.apply(HiveMetastoreCatalog.scala:637)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$anonfun$execute$1$anonfun$apply$1.apply(RuleExecutor.scala:83)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$anonfun$execute$1$anonfun$apply$1.apply(RuleExecutor.scala:80)
	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
	at scala.collection.immutable.List.foldLeft(List.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$anonfun$execute$1.apply(RuleExecutor.scala:80)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$anonfun$execute$1.apply(RuleExecutor.scala:72)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:72)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:36)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:36)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
	at org.apache.spark.sql.DataFrame.&amp;lt;init&amp;gt;(DataFrame.scala:133)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
	at org.apache.spark.sql.SQLContext.table(SQLContext.scala:831)
	at org.apache.spark.sql.SQLContext.table(SQLContext.scala:827)&lt;/P&gt;</description>
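For reference, the snippet above can be reproduced in spark-shell as follows (a sketch only: "tablename" is the poster's placeholder, and a Hive-enabled Spark 1.x build with a running metastore is assumed):

```scala
// spark-shell reproduction sketch (Spark 1.x API); "tablename" is a placeholder
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc)      // sc is the shell's SparkContext
val df = sqlContext.table("tablename")    // resolves the table via the Hive metastore
df.select("location").show(5)             // triggers analysis, where the assertion fires
```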
      <pubDate>Fri, 20 Jan 2017 10:36:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138244#M52054</guid>
      <dc:creator>TimothySpann</dc:creator>
      <dc:date>2017-01-20T10:36:15Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138245#M52055</link>
      <description>&lt;P&gt;
	Pretty low level. Looking into the source, the failing assertion is &lt;/P&gt;
&lt;PRE&gt;assert(expectedAttrs.length == attrs.length)
&lt;/PRE&gt;&lt;P&gt;
	What does that mean? I'm not entirely sure. A Google search turns up: &lt;/P&gt;&lt;P&gt;
	1. Stack Overflow &lt;A href="http://stackoverflow.com/questions/38740862/not-able-to-fetch-result-from-hive-transaction-enabled-table-through-spark-sql"&gt;http://stackoverflow.com/questions/38740862/not-able-to-fetch-result-from-hive-transaction-enabled-table-through-spark-sql&lt;/A&gt;&lt;/P&gt;&lt;P&gt;2. &lt;A href="https://issues.apache.org/jira/browse/SPARK-18355"&gt;SPARK-18355&lt;/A&gt;: Spark SQL fails to read data from an ORC Hive table that has a new column added to it&lt;/P&gt;&lt;P&gt;If #2 is the cause, there's no obvious workaround right now. There are some details in #1 on how the problem might be avoided.&lt;/P&gt;</description>
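The failing check compares the number of attributes the catalog expects with the number the converted ORC relation actually reports. Roughly (a simplified sketch, not the actual Spark source; `Attr` is a stand-in for Catalyst's attribute type):

```scala
// Simplified sketch of the length check in LogicalRelation (Spark 1.6-era);
// illustrates how a column added in the metastore but absent from old ORC
// files produces the AssertionError seen above.
case class Attr(name: String)

def assertSchemaMatches(expectedAttrs: Seq[Attr], attrs: Seq[Attr]): Unit =
  assert(expectedAttrs.length == attrs.length)

val metastoreSchema = Seq(Attr("col1"), Attr("col2"), Attr("newCol")) // after ALTER TABLE
val fileSchema      = Seq(Attr("col1"), Attr("col2"))                 // old ORC files
// assertSchemaMatches(metastoreSchema, fileSchema)  // would throw java.lang.AssertionError
```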
      <pubDate>Fri, 20 Jan 2017 22:02:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138245#M52055</guid>
      <dc:creator>stevel</dc:creator>
      <dc:date>2017-01-20T22:02:12Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138246#M52056</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/9304/tspann.html" nodeid="9304"&gt;@Timothy Spann&lt;/A&gt; Can you confirm if #2 is your case? if yes, I've got a workaround.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Jan 2017 04:39:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138246#M52056</guid>
      <dc:creator>sandyy006</dc:creator>
      <dc:date>2017-01-25T04:39:35Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138247#M52057</link>
      <description>&lt;P&gt;It's number 1.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Jan 2017 05:02:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138247#M52057</guid>
      <dc:creator>TimothySpann</dc:creator>
      <dc:date>2017-01-25T05:02:50Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138248#M52058</link>
      <description>&lt;P&gt;Sandeep, we're facing similar issues while using Zeppelin and believe it's because of #2: &lt;/P&gt;&lt;P style="margin-left: 20px;"&gt;%sql
select col1, col2 from db.orc_table where col2 = "val" and col1 = 12345
java.lang.AssertionError: assertion failed
	at scala.Predef$.assert(Predef.scala:165)
	at org.apache.spark.sql.execution.datasources.LogicalRelation$anonfun$1.apply(LogicalRelation.scala:39)
	at org.apache.spark.sql.execution.datasources.LogicalRelation$anonfun$1.apply(LogicalRelation.scala:38)
	at scala.Option.map(Option.scala:145)
	at org.apache.spark.sql.execution.datasources.LogicalRelation.&amp;lt;init&amp;gt;(LogicalRelation.scala:38)
	at org.apache.spark.sql.execution.datasources.LogicalRelation.copy(LogicalRelation.scala:31)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$convertToOrcRelation(HiveMetastoreCatalog.scala:588)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$anonfun$apply$2.applyOrElse(HiveMetastoreCatalog.scala:647)
	at org.apache.spark.sql.hive.HiveMetastoreCatalog$OrcConversions$anonfun$apply$2.applyOrElse(HiveMetastoreCatalog.scala:643)
&lt;/P&gt;&lt;P&gt;Can you please share your workaround?&lt;/P&gt;</description>
      <pubDate>Fri, 10 Feb 2017 01:02:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138248#M52058</guid>
      <dc:creator>jbarney</dc:creator>
      <dc:date>2017-02-10T01:02:39Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138249#M52059</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/1408/jamesmbarney.html" nodeid="1408"&gt;@James Barney&lt;/A&gt;&lt;P&gt;Just realised that workaround for 1 and 2 are same set "spark.sql.hive.convertMetastoreOrc", "false" &lt;/P&gt;</description>
      <pubDate>Fri, 10 Feb 2017 02:05:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138249#M52059</guid>
      <dc:creator>sandyy006</dc:creator>
      <dc:date>2017-02-10T02:05:19Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Weird Error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138250#M52060</link>
      <description>&lt;P&gt;Thank you! For future reference in Zeppelin: you set this property in the interpreter configuration, not in the paragraph where the SQL is executed.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Feb 2017 02:44:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Weird-Error/m-p/138250#M52060</guid>
      <dc:creator>jbarney</dc:creator>
      <dc:date>2017-02-10T02:44:36Z</dc:date>
    </item>
  </channel>
</rss>

