I want to access SAP HANA Vora tables in SparkR and create models on those tables.
I am able to find code for Hive tables (hiveContext <- sparkRHive.init(sc)). Can somebody help me understand how to create a Vora context?
Thank you for looking into my problem.
Vishal, you should be able to access Vora tables with HANA's JDBC driver. You just need to map them into a PySpark or SparkR context. Look it up, it's easy to find online... or let me know if you're stuck.
Thank you for the reply.
I ran the function below, but it is only available from Spark 2.0 onward:
data <- read.jdbc(jdbcUrl, "tablename", user = "user", password = "password")
I am currently using Spark 1.6.1 and cannot upgrade.
Do you know of any alternative solution?
Hi @vishal kuchhal,
If you can connect via JDBC in 2.0, you can do it in 1.6.1 as well.
The following approaches are supported in Apache Spark 1.6.1:
SparkR:
df <- loadDF(sqlContext, source = "jdbc", url = "jdbc:postgresql:dbserver", dbtable = "schema.tablename")
Scala:
val jdbcDF = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:postgresql:dbserver", "dbtable" -> "schema.tablename")).load()
Spark SQL:
CREATE TEMPORARY TABLE jdbcTable USING org.apache.spark.sql.jdbc OPTIONS (url "jdbc:postgresql:dbserver", dbtable "schema.tablename")
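To tie this back to the original question, the SparkR form above can be adapted to HANA by swapping in HANA's JDBC driver (`com.sap.db.jdbc.Driver`, shipped as `ngdbc.jar`) and HANA's URL scheme (`jdbc:sap://<host>:<port>`). This is only a sketch: the hostname, port, jar path, schema, table, and credentials below are placeholders you would replace with your own, and `ngdbc.jar` must be on the Spark classpath.

```r
# Sketch: Spark 1.6.1 SparkR reading a table through HANA's JDBC driver.
# Placeholders: hanahost, 30015, /path/to/ngdbc.jar, SCHEMA.VORA_TABLE, user/password.
library(SparkR)

# Put the HANA JDBC driver jar on the classpath when initializing the context
sc <- sparkR.init(sparkJars = "/path/to/ngdbc.jar")
sqlContext <- sparkRSQL.init(sc)

df <- loadDF(sqlContext,
             source   = "jdbc",
             url      = "jdbc:sap://hanahost:30015",   # HANA JDBC URL scheme
             driver   = "com.sap.db.jdbc.Driver",      # HANA JDBC driver class
             dbtable  = "SCHEMA.VORA_TABLE",
             user     = "user",
             password = "password")

head(df)  # inspect the first rows of the mapped table
```

The resulting `df` is an ordinary SparkR DataFrame, so you can feed it into whatever modeling functions you were planning to use on the Vora tables.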