We need to hook up Flume with an RDBMS (DB2) so that whenever updates happen on a table in DB2, the data is transferred to HDFS and then to a table in Hive.
So will this configuration still involve Kafka?
Or can Flume directly interact with an RDBMS?
Also, can Flume interact with Hive directly, or does the data need to be written into HDFS first and then read by Hive (using LOAD DATA INPATH, or the LOCATION option in CREATE TABLE, etc.), or should this go through Sqoop?
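For context, the two Hive options I mean look roughly like this (table name, columns, and HDFS path are made up for illustration):

```sql
-- Option A: an external table whose LOCATION points at the HDFS
-- directory the ingest writes to; Hive reads the files in place.
CREATE EXTERNAL TABLE db2_updates (
  id INT,
  payload STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/flume/db2_updates';

-- Option B: explicitly move already-landed HDFS files into a
-- (pre-existing) managed Hive table.
LOAD DATA INPATH '/user/flume/db2_updates/file-001'
INTO TABLE db2_updates_managed;
```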
Appreciate the insights.