Member since: 11-22-2016
Posts: 50
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2938 | 01-17-2017 02:54 PM |
10-05-2017
07:06 AM
Yes, the metastore does store the column names in its database. To view the column names of your table:

hive> SHOW COLUMNS IN table_name;

And to print the column names as a header along with query results:

hive> set hive.cli.print.header=true;
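If you are querying Hive from Spark, as elsewhere in this thread, the same information is available there too. A minimal sketch, assuming a spark-shell session with a SparkContext named sc; the db and table names are illustrative:

import org.apache.spark.sql.hive.HiveContext

// assumes an existing SparkContext `sc`, as in a spark-shell session
val sqlContext = new HiveContext(sc)

// print the column names of a Hive table (names here are placeholders)
sqlContext.table("db.table_name").columns.foreach(println)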
10-05-2017
06:48 AM
How did you run the script? It didn't return any results for me. Can you share yours?
08-21-2017
09:53 AM
But this is not a suitable solution for a production environment.
04-04-2017
01:50 AM
How did you solve it? Which things does one have to check?
04-04-2017
01:48 AM
How did you solve it, Max?
01-17-2017
02:54 PM
Fixed it, like below:

df.withColumn("Timestamp_val", lit(current_timestamp))

The second argument to .withColumn() expects a named column, and

val newDF = dataframe.withColumn("Timestamp_val", current_timestamp())

does not generate a named column; hence the exception.
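For reference, a minimal self-contained sketch of the fix above; the toy DataFrame and its column names are illustrative, not from the original job:

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions.{current_timestamp, lit}

// assumes an existing SparkContext `sc`, as in a spark-shell session
val sqlContext = new HiveContext(sc)
import sqlContext.implicits._

// toy DataFrame standing in for the one read from Kafka/JSON in the question
val df = sc.parallelize(Seq(("a", 1), ("b", 2))).toDF("key", "value")

// wrap the timestamp expression in lit(...) as described above
val newDF = df.withColumn("Timestamp_val", lit(current_timestamp()))
newDF.printSchema()
newDF.show()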
01-17-2017
12:19 PM
Hi all, here I'm trying to add a timestamp to the data frame dynamically, like this (the leading numbers are the source line numbers in HiveGenerator.scala, which the stack trace below points into):

73 messages.foreachRDD(rdd =>
74 {
75   val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
76   import sqlContext.implicits._
77   val dataframe = sqlContext.read.json(rdd.map(_._2)).toDF()
78   import org.apache.spark.sql.functions._
79   val newDF = dataframe.withColumn("Timestamp_val", current_timestamp())
80   newDF.show()
81   newDF.printSchema()

But this code is giving me a headache: sometimes it prints the schema, and sometimes it throws this:

java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:221)
at org.apache.spark.sql.catalyst.analysis.UnresolvedStar.expand(unresolved.scala:199)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$10$$anonfun$applyOrElse$14.apply(Analyzer.scala:354)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$10$$anonfun$applyOrElse$14.apply(Analyzer.scala:353)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$10.applyOrElse(Analyzer.scala:353)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$10.applyOrElse(Analyzer.scala:347)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:56)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$.apply(Analyzer.scala:347)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$.apply(Analyzer.scala:328)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:83)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:80)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:80)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:72)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:72)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:36)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:36)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$withPlan(DataFrame.scala:2126)
at org.apache.spark.sql.DataFrame.select(DataFrame.scala:707)
at org.apache.spark.sql.DataFrame.withColumn(DataFrame.scala:1188)
at HiveGenerator$$anonfun$main$1.apply(HiveGenerator.scala:79)
at HiveGenerator$$anonfun$main$1.apply(HiveGenerator.scala:73)

Where am I going wrong? Please help.
Labels:
- Apache Spark
01-16-2017
09:02 PM
Can I know which versions of Hive and Spark you are using?
01-14-2017
09:40 PM
Which version of Spark are you using? Assuming you are on 1.4 or higher:

import org.apache.spark.sql.hive.HiveContext

val hiveObj = new HiveContext(sc)
import hiveObj.implicits._

hiveObj.refreshTable("db.table") // if you have upgraded your Hive, do this to refresh the tables
val sample = hiveObj.sql("select * from table").collect()
sample.foreach(println)

This has worked for me.
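Side note: on Spark 2.x, where SparkSession replaces HiveContext, the equivalent refresh (a sketch, assuming a session named spark) would be:

spark.catalog.refreshTable("db.table")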
01-14-2017
09:33 PM
Did this work for you? If not, please post the code that did work for you.