Member since: 08-06-2013
Posts: 12
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 54364 | 05-07-2015 09:04 AM
 | 28458 | 04-08-2015 04:39 PM
02-26-2016
12:19 PM
Hey Craig - Spark's HiveContext requires the use of *some* metastore. In this case, since you're not specifying one, it's creating the default, file-based metastore_db. Here are some more details: https://github.com/apache/spark/blob/99dfcedbfd4c83c7b6a343456f03e8c6e29968c5/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala#L42 http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables A few options: 1) make sure the location is writable by your Spark processes 2) configure hive-site.xml to place the file in a different location 3) move to MySQL or equivalent for true metastore functionality (might be needed elsewhere)
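For option 2, a minimal hive-site.xml sketch. The /tmp/... paths are placeholder assumptions, not values from the thread; point them wherever your Spark processes have write access:

```xml
<!-- Hypothetical hive-site.xml snippet: relocates the embedded Derby
     metastore_db and the warehouse directory to writable paths. -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/tmp/spark-metastore/metastore_db;create=true</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/tmp/spark-warehouse</value>
  </property>
</configuration>
```

Placing this file on Spark's classpath (e.g. in conf/) is what option 2 refers to; option 3 would instead point ConnectionURL at a MySQL JDBC URL.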
01-15-2016
07:37 AM
I think I figured this out. In this case, I needed to use the other flavor of explode in the second operation: val ydf = xdf.explode("nar", "gname") { nar: Seq[String] => nar } It always happens as soon as you ask the question publicly...
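For context, a hedged sketch of the two explode overloads on the Spark 1.x DataFrame API (xdf, nar, and gname are the names from the post; this assumes nar is an array-of-strings column):

```scala
import org.apache.spark.sql.{DataFrame, Row}

// Sketch only, Spark 1.x API (DataFrame.explode was deprecated in 2.0
// in favor of the explode() function in org.apache.spark.sql.functions).

// Flavor 1: takes Column arguments and a Row => TraversableOnce[A <: Product];
// the caller must unpack the Row and wrap each value in a Product (e.g. Tuple1).
def flavorOne(xdf: DataFrame): DataFrame =
  xdf.explode(xdf("nar")) { case Row(nar: Seq[String @unchecked]) =>
    nar.map(Tuple1(_))
  }

// Flavor 2 (the one that resolved the issue): input and output column
// names plus a function from the column's value to a TraversableOnce.
def flavorTwo(xdf: DataFrame): DataFrame =
  xdf.explode("nar", "gname") { nar: Seq[String] => nar }
```

The second flavor avoids the Row-pattern-matching boilerplate when exploding a single typed column, which is why it was the right fit here.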
04-15-2015
09:58 PM
Oh, I finally got it working. Here is my HQL: SELECT id, part.lock, part.key FROM mytable LATERAL VIEW explode(parts) parttable AS part; Many thanks, chrisf!