Created 02-02-2017 06:34 AM
Hi,
I want to access SAP HANA Vora tables in SparkR and build models on those tables.
I was able to find code for Hive tables (hiveContext <- sparkRHive.init(sc)). Can somebody help me understand how to create a Vora context?
Thank you for looking into my problem.
Regards
Vishal Kuchhal
Created 02-02-2017 07:27 AM
Vishal, you should be able to access Vora tables with HANA's JDBC driver; you just need to map them into a PySpark or SparkR context. Look it up, it's easy to find online... or let me know if you're stuck.
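Something along these lines should work (an untested sketch; it assumes Spark 2.x's read.jdbc(), the HANA JDBC driver class com.sap.db.jdbc.Driver from ngdbc.jar on the classpath, and a jdbc:sap://<host>:<port> URL — swap in your own host, port, table, and credentials):
# launch SparkR with the HANA driver jar, e.g.: sparkR --jars /path/to/ngdbc.jar
df <- read.jdbc(url = "jdbc:sap://hanahost:30015",
                tableName = "SCHEMA.TABLENAME",
                driver = "com.sap.db.jdbc.Driver",
                user = "user",
                password = "password")
head(df)  # df is a regular SparkR DataFrame you can build models on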
Created 02-02-2017 07:39 AM
Thank you for the reply.
I ran the function below, but read.jdbc() is only available from Spark 2.0:
data <- read.jdbc(jdbcurl, "tablename", user = "user", password = "password")
I am currently using Spark 1.6.1 and cannot upgrade.
Do you know of any alternative solution?
Created 02-07-2017 07:44 AM
Hi, @vishal kuchhal
If it works over JDBC in 2.0, the same should be possible in 1.6.1 too.
The following JDBC data source APIs are supported in Apache Spark 1.6.1:
R: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_r_15
df <- loadDF(sqlContext, source = "jdbc", url = "jdbc:postgresql:dbserver", dbtable = "schema.tablename")
Scala: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_scala_15
val jdbcDF = sqlContext.read.format("jdbc")
  .options(Map("url" -> "jdbc:postgresql:dbserver", "dbtable" -> "schema.tablename"))
  .load()
SQL: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#tab_sql_15
CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (url "jdbc:postgresql:dbserver", dbtable "schema.tablename")
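Adapted to your Vora/HANA case on 1.6.1, the R call would look roughly like this (a sketch, not tested against Vora; the com.sap.db.jdbc.Driver class, the ngdbc.jar location, and the jdbc:sap://<host>:<port> URL format are assumptions — replace host, port, table, and credentials with your own):
# launch SparkR 1.6.1 with the driver jar, e.g.: sparkR --jars /path/to/ngdbc.jar
df <- loadDF(sqlContext,
             source = "jdbc",
             url = "jdbc:sap://hanahost:30015",
             driver = "com.sap.db.jdbc.Driver",
             dbtable = "SCHEMA.TABLENAME",
             user = "user",
             password = "password")
registerTempTable(df, "voratable")  # lets you query it via sql(sqlContext, "SELECT ...")
head(df)
Note that plain JDBC pulls rows through the driver, so for large tables you can push work down to the database by passing a subquery as the table, e.g. dbtable = "(SELECT col1, col2 FROM schema.tablename WHERE ...) t".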