Created 03-29-2016 12:44 PM
There are some great articles and threads here on HCC about using Spark to query data from external JDBC sources and mash it up with anything else you can get into an RDD. Has anyone actually seen this pattern (Spark as a federated DB over JDBC sources) used in production, with the JDBC Thrift server? What is the right configuration within a secure, multi-tenant Hadoop cluster? A rough sketch of the pattern I mean is below.
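For context, this is roughly the shape of the pattern I'm asking about, against the Spark 1.x API. This is a minimal sketch, not production code; the JDBC URL, table names, and credentials are all placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val conf = new SparkConf().setAppName("JdbcFederationSketch")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Read a table from an external RDBMS over JDBC.
    // URL, table, and credentials are placeholders.
    val ordersDF = sqlContext.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/sales")
      .option("dbtable", "public.orders")
      .option("user", "spark_reader")
      .option("password", "...")
      .load()

    // Expose it to SparkSQL (and, via a shared context,
    // to the JDBC Thrift server) as a temp table.
    ordersDF.registerTempTable("orders")

    // Join it with anything else registered in the same context,
    // e.g. an existing Hive table ("customers" is hypothetical).
    val mashup = sqlContext.sql(
      """SELECT o.order_id, c.segment
        |FROM orders o
        |JOIN customers c ON o.customer_id = c.id""".stripMargin)

The question is whether anyone runs this beyond a demo, and how it is secured.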
Created 03-29-2016 04:14 PM
@Vadim I have seen SparkSQL used in production for pulling RDBMS data. I have not yet seen it in a Kerberized environment; I will follow this thread for the secure configuration details.
Created 03-29-2016 04:27 PM
@azeltov How large is the implementation (how many tables, and how much is cached vs. read through to the source)? Where they cache an entire table, how do they ensure the data is not stale? Are the tables temporary or saved in the Hive metastore? A sketch of the options I'm distinguishing is below.