Hi,
I am trying to access an already existing Hive table from the spark-shell, but when I run the statements below I get a "table not found" error.
For example, a table named "departments" already exists in the default database in Hive.
I start the spark-shell and execute the following set of instructions:
import org.apache.spark.sql.hive.HiveContext
val sqlContext = new HiveContext(sc)
val depts = sqlContext.sql("select * from departments")
depts.collect().foreach(println)
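To check what the HiveContext actually sees, I am assuming something like the following should list the tables in whatever metastore the spark-shell is talking to (just a diagnostic idea, I have not run it yet):

// list all tables visible to this HiveContext, to see which metastore spark-shell is using
sqlContext.sql("show tables").collect().foreach(println)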
But it couldn't find the table.
Now my questions are:
1. As I understand it, Spark can access the Hive metastore through HiveContext, but that is not happening here. Is there any configuration setup required? I am using the Cloudera QuickStart VM 5.5.
2. As an alternative, I created a table from the spark-shell, loaded a data file into it, ran some queries and then exited the spark-shell (roughly as sketched below).
3. Even though I created the table using the spark-shell, it does not exist anywhere when I try to access it from the Hive editor.
4. When I start the spark-shell again, the table I created earlier is no longer there. So where exactly are this table and its metadata stored?
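For reference, what I did in point 2 was roughly the following (the table name, file path and columns here are only placeholders, not the exact ones I used):

// create a simple table through the HiveContext
sqlContext.sql("CREATE TABLE IF NOT EXISTS test_depts (dept_id INT, dept_name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','")
// load a local data file into it
sqlContext.sql("LOAD DATA LOCAL INPATH '/home/cloudera/depts.txt' INTO TABLE test_depts")
// query it back
sqlContext.sql("SELECT * FROM test_depts").collect().foreach(println)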
I am very confused, because according to the theory, this should all go into the Hive metastore.
Thanks & Regards