Created on 12-23-2020 09:18 AM - edited on 12-23-2020 05:18 PM by subratadas
Sometimes you need real CDC: you have access to the transaction change logs, and you use a tool like Qlik Replicate or Oracle GoldenGate to pump change records into Kafka, where Flink SQL or NiFi can read and process them.
Other times you need something simpler: you just want the inserts and updates on a few tables of interest delivered as events. Apache NiFi can do this easily for you with QueryDatabaseTableRecord. You don't need to know anything but the database connection information, the table name, and which column changes. NiFi will query the table, track state, and hand you only the new records. Nothing is hardcoded: parameterize those values and you have a generic "any RDBMS" to "any other store" data pipeline. We are reading as records, which means each FlowFile in NiFi can hold thousands of records for which we know all the fields, types, and schema-related information. The schema can be one NiFi infers, or one we manage in a schema registry such as Cloudera's excellent open source Schema Registry.
Let's see what data is in our PostgreSQL table:
QueryDatabaseTableRecord (we will output JSON records, but could have chosen Parquet, XML, CSV, or Avro)
UpdateAttribute - optional - set a table and schema name; this can be done with parameters as well.
MergeRecord - optional - batch the records up.
PutORC - send these records to HDFS (which could be backed by bare-metal disks, GCS, S3, Azure, or ADLS). This will build us an external Hive table.
As you can see, we are watching the "prices" table and tracking the maximum values of the updated_on date and the item_id sequential key, so each run fetches only new rows. We then output JSON records.
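Under the hood, QueryDatabaseTableRecord effectively issues an incremental query built from the configured maximum-value columns. The sketch below shows the general shape against the prices table; the exact SQL NiFi generates depends on the database adapter, and the placeholder values stand in for the maximums NiFi keeps in its stored state:

```sql
-- Sketch of the incremental fetch QueryDatabaseTableRecord performs.
-- The comparison values come from NiFi's stored processor state
-- (the maximums seen on the previous run), not from user input.
SELECT item_id, price, created_on, updated_on
FROM prices
WHERE updated_on > '2020-12-22 09:00:00'   -- last max updated_on from state
  AND item_id > 10250                      -- last max item_id from state
ORDER BY updated_on, item_id
```

On the first run there is no stored state, so the full table is fetched; after that, only rows beyond the recorded maximums come through.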
We could then:
PutHDFS (send as JSON, CSV, or Parquet) and build an external Impala or Hive table on top
PutHive3Streaming (Hive 3 ACID tables)
PublishKafkaRecord_2_* - send a copy to Kafka for Flink SQL, Spark Streaming, Spring, etc.
PutDatabaseRecord - send to another JDBC datastore
PutDruidRecord - Druid is a cool datastore; check it out on CDP Public Cloud
PutRecord (to many RecordSinkServices such as databases, Kafka, Prometheus, scripted sinks, and Site-to-Site)
PutParquet (store to HDFS as Parquet files)
You can do any number of these, all of them, or multiple copies of each to other clouds or clusters. You can also do enrichment, transformation, alerting, querying, or routing.
These records can also be manipulated ETL/ELT style with in-stream record processing, with options such as:
QueryRecord (use Calcite ANSI SQL to query and transform records; can also change the output type)
JoltTransformRecord (use JOLT against any record, not just JSON)
LookupRecord (to match against lookup services such as caches, Kudu, REST services, ML models, HBase, and more)
PartitionRecord (to break records into like groups)
SplitRecord (to break record groups into individual records)
UpdateRecord (update values in fields; often paired with LookupRecord)
ValidateRecord (check against a schema and check for extra fields)
ConvertRecord (change between formats, such as JSON to CSV)
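As an example of the first option, QueryRecord runs Calcite SQL against the records inside each FlowFile, which are exposed as a table named FLOWFILE. A sketch against the prices records from earlier (the price threshold is just an illustrative value):

```sql
-- Keep only meaningful rows and reshape the output columns.
-- Add this as a dynamic property on QueryRecord; the property name
-- becomes a new routing relationship for the matching records.
SELECT item_id,
       price,
       updated_on
FROM FLOWFILE
WHERE price > 100.0
```

Because each dynamic property becomes its own relationship, one QueryRecord processor can split a stream several ways with several queries at once.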
When you use PutORC, it gives you the DDL for building your external table. You could add a PutHiveQL processor to auto-create the table, but most companies want this done by a DBA.
CREATE EXTERNAL TABLE IF NOT EXISTS `pricesorc` (`item_id` BIGINT, `price` DOUBLE, `created_on` BIGINT, `updated_on` BIGINT) STORED AS ORC LOCATION '/user/tspann/prices'
REST to Database
Let's reverse this now. Sometimes you want to take data, say from a REST service, and store it in a JDBC datastore.
InvokeHTTP (read from a REST endpoint)
PutDatabaseRecord (put JSON to our JDBC store).
That's all it takes to store data in a database. We could add some of the ETL/ELT enrichments mentioned above, or others that manipulate content.
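PutDatabaseRecord maps record field names to column names, so the target table just needs columns matching the JSON fields. The table and field names below are hypothetical, purely to illustrate the shape:

```sql
-- Hypothetical target table for PutDatabaseRecord.
-- Column names must match the JSON record field names;
-- the record reader's schema supplies the types.
CREATE TABLE IF NOT EXISTS rest_events (
  event_id   BIGINT PRIMARY KEY,
  payload    VARCHAR(4000),
  created_on TIMESTAMP
);
```

With the table in place, set PutDatabaseRecord's Statement Type to INSERT (or UPSERT, where the database supports it) and point it at your Database Connection Pool.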
Database Connection Pool
Get the REST Data
From ApacheCon 2020, John Kuchmek gives a great talk on Incrementally Streaming RDBMS Data.