Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 775 | 04-03-2024 06:39 AM |
| | 1425 | 01-12-2024 08:19 AM |
| | 771 | 12-07-2023 01:49 PM |
| | 1327 | 08-02-2023 07:30 AM |
| | 1922 | 03-29-2023 01:22 PM |
04-12-2021
04:37 PM
Put both tables in Kafka topics and have SQL Stream Builder join them with a simple SQL join. Or see: https://community.cloudera.com/t5/Support-Questions/Nifi-how-to-sql-join-two-flowfiles/td-p/298227 http://apache-nifi-users-list.2361937.n4.nabble.com/Joining-two-or-more-flow-files-and-merging-the-content-td10543.html https://medium.com/@surajnagendra/merge-csv-files-apache-nifi-21ba44e1b719
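The join the linked threads describe can be sketched outside NiFi too. A minimal Python sketch, assuming two hypothetical CSV payloads (standing in for the content of two flowfiles) joined on a shared key column — `join_csv` and the sample data are illustrative, not part of any NiFi API:

```python
import csv
import io

# Hypothetical sample data standing in for the content of two flowfiles.
users_csv = "id,name\n1,alice\n2,bob\n"
orders_csv = "id,total\n1,9.99\n2,4.50\n"

def join_csv(left_text, right_text, key):
    """Inner-join two CSV strings on a shared key column."""
    left_rows = list(csv.DictReader(io.StringIO(left_text)))
    # Index the right side by key for O(1) lookups.
    right_index = {row[key]: row for row in csv.DictReader(io.StringIO(right_text))}
    joined = []
    for row in left_rows:
        match = right_index.get(row[key])
        if match is not None:
            merged = dict(row)
            merged.update(match)  # right-side columns win on collision
            joined.append(merged)
    return joined

print(join_csv(users_csv, orders_csv, "id"))
# → [{'id': '1', 'name': 'alice', 'total': '9.99'}, {'id': '2', 'name': 'bob', 'total': '4.50'}]
```

In SQL Stream Builder the same thing is one `SELECT ... JOIN ... ON` statement over the two Kafka-backed tables; the sketch just shows the shape of the result.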
04-12-2021
04:34 PM
Use Stateless NiFi: https://medium.com/@tspann_38871/exploring-apache-nifi-1-10-parameters-and-stateless-engine-b0815e924938
04-01-2021
03:51 PM
Move all events to Kafka
03-18-2021
02:06 PM
NiFi versions are tied to Hive versions, so you need a compatible one. Check with your Cloudera team to get the correct version. Using PutHive3Streaming will be faster. So is just PutOrc, PutParquet, or PutHDFS.
03-18-2021
12:23 PM
OK, that's pretty old. PutHiveQL is not the best option, but it will do for now; see: https://issues.apache.org/jira/browse/NIFI-4684 PutHiveStreaming, PutHDFS, or PutORC (then creating an external table over the files) is better, or PutDatabaseRecord for JDBC. I would highly recommend updating to HDF 3.5.2 and HDP 3.1, or to CDP, as these versions are going to be out of support soon.
03-18-2021
08:18 AM
What version of Hive? Is this CDH or HDP? PutHive3Streaming is faster and better: https://docs.cloudera.com/cdf-datahub/7.2.7/nifi-hive-ingest/topics/cdf-datahub-nifi-hive-ingest.html https://docs.cloudera.com/cdf-datahub/7.2.7/nifi-hive-ingest/topics/cdf-datahub-hive-ingest-data-target.html You can also use PutORC, convert to ORC and push to HDFS, or push to HDFS as Parquet: https://www.datainmotion.dev/2019/10/migrating-apache-flume-flows-to-apache.html Use Record processors; they are easier and MUCH faster, and you won't need a split then: https://www.datainmotion.dev/2020/12/simple-change-data-capture-cdc-with-sql.html I recommend using CFM NiFi version 1.11.4 or newer.
03-15-2021
04:48 PM
NiFi for XML / RSS / REST Feed Ingest

I want to retrieve the status from various cloud providers and services, including Cloudera, AWS, Azure, and Google. I have found that many of the available status APIs return XML/RSS. We love that format for Apache NiFi, so let's do that.

Note: If you are doing development in a non-production environment, try the new NiFi 1.13.1. If you need to run your flows in production on-premise, in a private cloud, or in the public cloud, then use Cloudera Flow Management.

I have separated the processing module "Status" from the input, so I can pass in the input any way I want. When I move this to a Kubernetes environment, this will become a parameter that I pass in. Stay tuned to Cloudera releases.

The flow for processing RSS status data is pretty simple. We call the status URL, and in the next step easily convert RSS into JSON for easier processing. I split these records and grab just the fields I like. I can easily add additional fields from my metadata for unique id, timestamp, company name, and service name. PutKudu will store my JSON records as Kudu fields at high speed. If something goes wrong, we will try again. Sometimes the internet is down! But without this app, how will we know?

We can run a QueryRecord processor to query live fields from the status messages, and I will send Spark-related ones to my Slack channel. I can add as many ANSI SQL92 Calcite queries as I wish. It's easy.

We were easily able to insert all the status messages into our 'cloudstatus' table. Now we can query it and use it in reports, dashboards, and visual applications. I don't want to have to go to external sites to get the status alerts, so I will post key ones to a Slack channel.

I want to store my status reads in a table for fast analytics and permanent storage, so I will store them in a Kudu table with Impala on top for fast queries.
CREATE TABLE cloudstatus (
  `uuid` STRING,
  `ts` TIMESTAMP,
  `companyname` STRING,
  `servicename` STRING,
  `title` STRING,
  `description` STRING,
  `pubdate` STRING,
  `link` STRING,
  `guid` STRING,
  PRIMARY KEY (`uuid`, `ts`)
)
PARTITION BY HASH PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

My source code is available here. In the next step, I can write a real-time dashboard with Cloudera Visual Apps, add fast queries on Kafka with Flink SQL, or write some machine learning in Cloudera Machine Learning to finish the application. Join my next live video broadcast to suggest what we do with this data next. Thanks for reading!
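The "convert RSS into JSON and add metadata fields" step can be sketched outside NiFi as well. A minimal Python sketch, assuming a hypothetical status-feed payload; `rss_to_records` and the sample XML are illustrative (not part of the original flow), and the output field names mirror the `cloudstatus` columns:

```python
import json
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Hypothetical RSS payload like what a provider status feed might return.
rss = """<rss version="2.0"><channel><title>Example Status</title>
<item><title>Service degraded</title><description>Investigating</description>
<pubDate>Mon, 15 Mar 2021 12:00:00 GMT</pubDate>
<link>https://status.example.com/1</link><guid>1</guid></item>
</channel></rss>"""

def rss_to_records(xml_text, companyname, servicename):
    """Flatten RSS <item> elements into JSON-ready dicts, adding
    the unique id, timestamp, company, and service metadata fields."""
    root = ET.fromstring(xml_text)
    records = []
    for item in root.iter("item"):
        records.append({
            "uuid": str(uuid.uuid4()),
            "ts": datetime.now(timezone.utc).isoformat(),
            "companyname": companyname,
            "servicename": servicename,
            "title": item.findtext("title"),
            "description": item.findtext("description"),
            "pubdate": item.findtext("pubDate"),
            "link": item.findtext("link"),
            "guid": item.findtext("guid"),
        })
    return records

print(json.dumps(rss_to_records(rss, "Example", "Compute"), indent=2))
```

In the NiFi flow this is handled by the record-oriented processors; the sketch just shows the record shape that PutKudu would receive.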
03-15-2021
02:24 PM
1 Kudo
https://dev.to/tspannhw/ingesting-all-the-weather-data-with-apache-nifi-2ho4
03-15-2021
02:20 PM
1 Kudo
I would create a schema and then use PutDatabaseRecord. Do you have an example of the output data?
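To illustrate the "create a schema" part: a minimal Python sketch that builds an Avro-style record schema of the kind a record reader feeding PutDatabaseRecord would use. The field names and types here are placeholders, since the actual output data was never shared:

```python
import json

# Hypothetical Avro record schema; replace the fields with the
# real column names and types once the output data is known.
schema = {
    "type": "record",
    "name": "event",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": ["null", "string"], "default": None},
        {"name": "created_at", "type": ["null", "string"], "default": None},
    ],
}

# Pretty-print it, e.g. to paste into an AvroSchemaRegistry entry.
print(json.dumps(schema, indent=2))
```

Nullable fields are expressed as the union `["null", "string"]` with a `null` default, so records with missing values still validate.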