Spark3 connection to HIVE ACID Tables

Explorer

Hi guys,

I have a data lake (built on Hive managed tables) and I would like to load data incrementally into the warehouse (also Hive managed tables) using Spark 3.2, but I am facing an issue connecting to Hive managed tables from Spark 3.

 

I would like to know:

How do I connect to Hive ACID tables? Can I use JDBC, and if so, how? Or is there another way?

 

Thank you!

1 ACCEPTED SOLUTION

Master Collaborator

Hi @Asim-, the Hive Warehouse Connector (HWC) provides secure access to Hive managed (ACID) tables from Spark. You need to use HWC to query Apache Hive managed tables from Apache Spark.

As of now, HWC supports Spark2 in CDP 7.1.7.

HWC is not yet a supported feature for Spark 3.2 / CDS 3.2 in CDP 7.1.7.

We expect HWC support for Spark3 to be included in the upcoming CDS 3.3 release in CDP 7.1.8.
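For reference, here is a minimal sketch of reading a managed table through HWC from Spark2 on CDP. The table name is a placeholder, and the HWC configuration (assembly jar, spark.sql.hive.hiveserver2.jdbc.url, etc.) is assumed to have been supplied at spark-shell / spark-submit launch time as described in the HWC documentation:

// Sketch only: assumes the HWC assembly jar and HWC configs were passed at launch.
import com.hortonworks.hwc.HiveWarehouseSession

// Build an HWC session on top of the existing SparkSession (spark in spark-shell).
val hive = HiveWarehouseSession.session(spark).build()

// Read from a Hive managed (ACID) table; test_db.users is a placeholder name.
val df = hive.executeQuery("SELECT * FROM test_db.users LIMIT 10")
df.show()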


8 REPLIES


Explorer

Thank you @jagadeesan for your reply,

 

As far as I know, HWC does not support INSERT/UPDATE on Hive ACID tables; correct me if I'm wrong.

 

Also, is there currently any way to connect to Hive ACID tables from Spark 3 other than HWC?

 

Thank you!

Master Collaborator

@Asim-  Run CREATE, UPDATE, DELETE, INSERT, and MERGE statements in this way:

hive.executeUpdate("INSERT INTO table_name (column1, column2,...) VALUES (value1, value2,...)")

For more details, you can refer to the HWC read and write operations documentation.
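As a rough sketch of what that looks like end to end (here hive is an HWC session built with HiveWarehouseSession.session(spark).build(), and test_db.users / test_db.users_staging are placeholder tables):

// Sketch only: DML on a managed (ACID) table goes through hive.executeUpdate(...),
// reads go through hive.executeQuery(...). Table and column names are placeholders.
hive.executeUpdate("INSERT INTO test_db.users (id, name) VALUES (1, 'alice')")
hive.executeUpdate("UPDATE test_db.users SET name = 'bob' WHERE id = 1")
hive.executeUpdate(
  "MERGE INTO test_db.users AS t USING test_db.users_staging AS s " +
  "ON t.id = s.id " +
  "WHEN MATCHED THEN UPDATE SET name = s.name " +
  "WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.name)")
hive.executeQuery("SELECT * FROM test_db.users").show()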

 

Other than HWC, we don't have any other way to connect to Hive ACID tables from Apache Spark; as mentioned earlier, we expect this feature to be released in the upcoming CDS 3.3 release.

Explorer

Thank you @jagadeesan,

But is it possible to connect to Hive via JDBC from Spark 3.x?

Master Collaborator

Yes, for more details you can refer here

Explorer

Hi @jagadeesan ,

I am trying to connect to Hive from Spark 3 via the Hive JDBC driver (HiveJDBC42), and I am getting the below error:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("Spark - Hive").config("spark.sql.warehouse.dir", "/warehouse/tablespace/managed/hive").enableHiveSupport().getOrCreate()
val table_users = spark.read.format("jdbc").
  option("url", "hive").
  option("url", "jdbc:hive2://127.0.0.1:2181:2181;password=****;principal=hive/_HOST@Example.com;serviceDiscoveryMode=zooKeeper;ssl=1;user=user1;zooKeeperNamespace=hiveserver2").
  option("driver", "com.cloudera.hive.jdbc.HS2Driver").
  option("query", "select * from test_db.users LIMIT 1").
  option("fetchsize", "20").
  load()
java.sql.SQLException: [Cloudera][JDBC](11380) Null pointer exception.
  at com.cloudera.hiveserver2.hive.core.HiveJDBCConnection.setZookeeperServiceDiscovery(Unknown Source)
  at com.cloudera.hiveserver2.hive.core.HiveJDBCConnection.readServiceDiscoverySettings(Unknown Source)
  at com.cloudera.hiveserver2.hivecommon.core.HiveJDBCCommonConnection.readServiceDiscoverySettings(Unknown Source)
  at com.cloudera.hiveserver2.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
  at com.cloudera.hiveserver2.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
  at com.cloudera.hiveserver2.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
  at com.cloudera.hiveserver2.jdbc.common.AbstractDriver.connect(Unknown Source)
  at org.apache.spark.sql.execution.datasources.jdbc.connection.BasicConnectionProvider.getConnection(BasicConnectionProvider.scala:49)
  at org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider$.create(ConnectionProvider.scala:77)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:64)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.getQueryOutputSchema(JDBCRDD.scala:62)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:57)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:239)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:36)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274)
  at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245)
  at scala.Option.getOrElse(Option.scala:189)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
  ... 54 elided
Caused by: java.lang.NullPointerException

Master Collaborator

@Asim- Even with JDBC you need HWC for managed tables. Here is the example for Spark2. But, as mentioned earlier, for Spark3 we don't have any way to connect to Hive ACID tables from Apache Spark other than HWC, and HWC is not yet a supported feature for Spark 3.2 / CDS 3.2 in CDP 7.1.7. Marking this thread as closed; if you have any issues related to external tables, kindly start a new Support Questions thread for better tracking of the issue and documentation. Thanks.
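For completeness, a hedged sketch of what the HWC JDBC read mode looks like on Spark2 (CDP 7.1.x). The property names below are the ones documented for HWC in CDP 7.1.x, so verify them against your release; the hosts, namespaces, and table name are placeholders, and the HWC assembly jar still has to be on the Spark classpath (e.g. via --jars at launch):

// Sketch only: HWC in JDBC read mode on Spark2. Verify property names for your
// CDP release; hosts, namespaces, and test_db.users are placeholders.
spark.conf.set("spark.sql.hive.hiveserver2.jdbc.url",
  "jdbc:hive2://zk-host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2")
spark.conf.set("spark.datasource.hive.warehouse.read.mode", "JDBC_CLUSTER")
spark.conf.set("spark.datasource.hive.warehouse.metastoreUri", "thrift://metastore-host:9083")

import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.executeQuery("SELECT * FROM test_db.users LIMIT 1").show()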

Community Manager

@Asim-, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.



Regards,

Vidya Sargur,
Community Manager

