Spark SQL as a Federated DB in Production?
Labels: Apache Spark
Created 03-29-2016 12:44 PM
There are some great articles and threads here on HCC about using Spark to query data from other JDBC sources and mash them up with anything else you can get into an RDD. Has anyone seen this pattern (Spark as a Federated DB including JDBC sources) actually used in Production (with JDBC thrift server)? What is the right configuration within a secure, multi-tenant Hadoop cluster?
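For anyone landing on this thread, here is a minimal sketch of the pattern being asked about, written against the Spark 1.6 API of the time. The JDBC URL, credentials, and the `orders` Hive table are made-up placeholders, and exposing the context via `HiveThriftServer2.startWithContext` requires the spark-hive-thriftserver module on the classpath:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

object FederatedDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("federated-db-demo"))
    val sqlContext = new HiveContext(sc)

    // Pull a table from an external RDBMS; the URL, table, and
    // credentials are placeholders for whatever JDBC source is federated.
    val customers = sqlContext.read.format("jdbc")
      .option("url", "jdbc:postgresql://dbhost:5432/sales")
      .option("dbtable", "public.customers")
      .option("user", "spark_reader")
      .option("password", "********")
      .load()

    // Make the external table queryable by name next to Hive tables.
    customers.registerTempTable("customers")

    // Expose this same context over JDBC/ODBC so BI tools can query the
    // federated view through the thrift server.
    HiveThriftServer2.startWithContext(sqlContext)

    // Example mash-up: join the RDBMS table against a Hive table
    // (here, a hypothetical "orders" table in the metastore).
    sqlContext.sql(
      """SELECT c.name, SUM(o.total) AS lifetime_value
        |FROM customers c
        |JOIN orders o ON c.id = o.customer_id
        |GROUP BY c.name""".stripMargin).show()
  }
}
```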
Created 03-29-2016 04:14 PM
@Vadim I have seen Spark SQL used in production for pulling RDBMS data. I have not yet seen it used in a Kerberos environment; I will follow this thread for secure configurations.
Created 03-29-2016 04:27 PM
@azeltov How large is the implementation (how many tables, and how much is cached versus read through to the source)? In the cases where they cache an entire table, how do they ensure the data is not stale? Are the tables temporary or saved in the Hive metastore?
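For reference, the three options being asked about look roughly like this in the Spark 1.6 API. This is only a sketch: the JDBC source and table names are placeholders, and the refresh step simply drops and re-populates the cache, which was the usual workaround at the time:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("staleness-demo"))
val sqlContext = new HiveContext(sc)

// Placeholder JDBC read; swap in your own source.
val jdbcDF = sqlContext.read.format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/sales")
  .option("dbtable", "public.customers")
  .load()

// 1) Read-through: a temporary table is only a definition, so each
//    query goes back to the source and is never stale.
jdbcDF.registerTempTable("customers_live")

// 2) Cached: pins the rows in cluster memory at load time. Fast, but
//    frozen; uncache and re-cache to pick up changes at the source.
sqlContext.cacheTable("customers_live")
sqlContext.uncacheTable("customers_live")
sqlContext.cacheTable("customers_live")

// 3) Persisted: writes a physical snapshot into the Hive metastore. It
//    survives the session but is a copy, stale until rewritten.
jdbcDF.write.mode("overwrite").saveAsTable("customers_snapshot")
```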
