Hello Cloudera community,

We are having problems using Livy to run Spark jobs that read from Hive via a Jupyter notebook. When we run a simple query, for example:

spark.sql("show databases").show()

it returns the error below:

org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

Could you help us with this setup?

PS: we are using CDH 5.16.x.
Solved the problem by placing the hive-site.xml file in the Spark and Spark2 configuration directories, after which the Spark jobs submitted through Livy from the Jupyter notebook ran successfully.
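A minimal sketch of that fix, assuming the typical CDH client-config layout (the /etc/hive and /etc/spark paths below are common defaults and may differ on your cluster):

```shell
# Copy the Hive client configuration into the Spark and Spark2 conf
# directories so Spark can reach the Hive metastore instead of falling
# back to a local embedded one. Paths are assumed CDH defaults.
sudo cp /etc/hive/conf/hive-site.xml /etc/spark/conf/
sudo cp /etc/hive/conf/hive-site.xml /etc/spark2/conf/
```

After copying, restart the Livy session so new Spark sessions pick up the metastore settings.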