
how to access SAP VORA tables in SparkR


New Contributor


Hi,

I want to access SAP HANA VORA tables in SparkR and build models on the VORA tables.

I am able to find code for a Hive table (hiveContext <- sparkRHive.init(sc)). Can somebody help me understand how to create a VORA context?

Thank you for looking into my problem.

Regards

Vishal Kuchhal

3 REPLIES

Re: how to access SAP VORA tables in SparkR

Explorer

Vishal, you should be able to access VORA tables with HANA's JDBC driver. You just need to map the JDBC source into a PySpark or SparkR context. It's easy to find online, or let me know if you're stuck.
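For example, a minimal SparkR sketch of that mapping (the `jdbc:sap` URL, host, port, driver class, table name, and credentials below are placeholder assumptions, not values from this thread — substitute your own connection details):

```r
# Sketch: load a VORA/HANA table into SparkR over JDBC (Spark 2.x API).
# URL, driver class, table, and credentials are placeholders for your setup.
df <- read.jdbc("jdbc:sap://hanahost:30015/", "SCHEMA.TABLENAME",
                user = "user", password = "password",
                driver = "com.sap.db.jdbc.Driver")
head(df)  # inspect a few rows to confirm the connection works
```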


Re: how to access SAP VORA tables in SparkR

New Contributor

Thank you for the reply.

I ran the function below, but it is only available from Spark 2.0:

data <- read.jdbc(jdbcurl, "tablename", user = "user", password = "password")

I am currently using Spark 1.6.1 and cannot upgrade. Do you know of any alternative solution?

Re: how to access SAP VORA tables in SparkR

Expert Contributor

Hi, @vishal kuchhal

If you can connect via JDBC in 2.0, it seems you can do the same in 1.6.1, too.

The following is supported in Apache Spark 1.6.1:

R: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_r_15

Scala: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_scala_15

SQL: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_sql_15

R:

df <- loadDF(sqlContext, source = "jdbc", url = "jdbc:postgresql:dbserver", dbtable = "schema.tablename")

Scala:

val jdbcDF = sqlContext.read.format("jdbc").options(
  Map("url" -> "jdbc:postgresql:dbserver",
      "dbtable" -> "schema.tablename")).load()

SQL:

CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:postgresql:dbserver",
  dbtable "schema.tablename"
)
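Adapting the R example above to a HANA/VORA source would look roughly like this — a sketch only; the `jdbc:sap` URL, driver class, and table name are assumptions, so substitute your own connection details:

```r
# Spark 1.6.x SparkR: map a JDBC table into the sqlContext without read.jdbc.
# URL, driver class, credentials, and table name below are placeholders.
df <- loadDF(sqlContext, source = "jdbc",
             url = "jdbc:sap://hanahost:30015/?user=user&password=password",
             driver = "com.sap.db.jdbc.Driver",
             dbtable = "SCHEMA.TABLENAME")
registerTempTable(df, "voraTable")  # then query it with sql(sqlContext, "SELECT ...")
```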