Hello Cloudera community,
We are having problems using Livy to run Spark jobs that read Hive from a Jupyter notebook.
When we run a simple query, for example:
"spark.sql("show databases").show()"
it returns the error below:
"org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient"
Could you help us with this setup?
PS: we are using CDH 5.16.x.