When I run it in spark-shell it executes fine, but when I run the same statement in a compiled Scala program it fails with an "overloaded method value" error.
After researching, I found that in a Scala program I need to pass an RDD as the argument instead of a DataFrame.
Is there a way to make the spark-shell statement run in a Scala program without converting the DataFrame to an RDD?
Note: my DataFrame has both plain column values and JSON string columns, and the statement above takes only the JSON values as input.
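For context, here is a minimal sketch of one common way this error arises and can be avoided without an RDD conversion. It assumes Spark 2.2+, where `DataFrameReader.json` has an overload that accepts a `Dataset[String]`; in spark-shell the session implicits are imported automatically, but in a compiled program you must import them yourself so `.as[String]` resolves. All table, column, and variable names below are placeholders, not taken from the original code.

```scala
import org.apache.spark.sql.SparkSession

object JsonColumnSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("json-column-sketch")
      .master("local[*]")
      .getOrCreate()
    // In spark-shell this import is done for you; in a program it is required
    // for .as[String] and the $"col" syntax to compile.
    import spark.implicits._

    // Placeholder DataFrame: one plain column plus one JSON string column.
    val df = Seq(
      (1, """{"name":"a","score":10}"""),
      (2, """{"name":"b","score":20}""")
    ).toDF("id", "jsonCol")

    // Selecting the JSON column as Dataset[String] lets the
    // json(Dataset[String]) overload resolve unambiguously,
    // so no conversion to an RDD is needed.
    val parsed = spark.read.json(df.select($"jsonCol").as[String])
    parsed.show()

    spark.stop()
  }
}
```

Without the `.as[String]` cast, `df.select(...)` is a `Dataset[Row]`, which matches none of the `json(...)` overloads and produces exactly the "overloaded method value" compile error described above.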
Help is much appreciated.