Hi guys, I'm developing a Hadoop project and I need to use SAS at the end. I already use Pig and Hive for data transformation, and my data sits in one big Hive table. In this project it is mandatory to use Spark and to integrate the data into SAS, and as far as I can see both are very good for data analytics. But what I'm thinking is: do both have the same goal in the end? So my question is: what kind of operations can I do with Spark to make the ingestion into SAS better? Thanks
If your data is already in a Hive table, you can connect your SAS application server directly to the Hive server over a JDBC connection.
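As a sketch of what that connection looks like (assuming SAS/ACCESS Interface to Hadoop is licensed on your SAS server; the host name, port, schema, credentials, and table name below are placeholders):

```sas
/* Sketch only: assumes SAS/ACCESS Interface to Hadoop is installed.  */
/* server, port, schema, credentials, and table names are placeholders. */
libname hivedb hadoop
    server="my-hive-server"    /* HiveServer2 host                     */
    port=10000                 /* default HiveServer2 port             */
    schema=default             /* Hive database holding your big table */
    user=myuser password=mypass;

/* The Hive table now appears as a regular SAS library member and can
   be queried with ordinary SAS procedures, for example: */
proc sql;
    select count(*) from hivedb.my_big_table;
quit;
```

Queries like the PROC SQL step above can generally be passed down to Hive (implicit pass-through), so heavy aggregation runs on the cluster instead of pulling the whole table into SAS.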