Hi,
I could not access the Hive tables from the pyspark shell. We are using Cloudera CDH 5.13 and Spark 2.2.
I can see the list of tables in the Hive databases (from the Hive CLI), but I could not access them from the Spark shell.
Please see the attached snapshot for more details.


While looking for a workaround, I checked hive-site.xml and spark-env.sh; everything there seems to be correct.
Which Hive metastore/database is pyspark using in this case, and why is it different from the one Hive uses? How can I point Spark at the correct Hive metastore so I can access the a1, a2, and a3 tables from Hive?
Note: only tables that I create from the Spark shell itself are accessible there.
Looking forward to your expert advice.