Posts: 1973
Kudos Received: 1225
Solutions: 124
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1774 | 04-03-2024 06:39 AM |
| | 2758 | 01-12-2024 08:19 AM |
| | 1530 | 12-07-2023 01:49 PM |
| | 2291 | 08-02-2023 07:30 AM |
| | 3125 | 03-29-2023 01:22 PM |
07-29-2020
02:27 PM
Thanks, I will think about refining the distinction between Kudu and Druid. Currently I would not want to include the fact that Flink has state as 'storage', but regarding Flink SQL, I may actually make another post later to talk about the ways to interact with and access different kinds of data. (As someone also noticed, Impala is not here because it is not a store in itself; it works with stored data.)
07-21-2020
09:19 AM
1 Kudo
The easiest way to grab monitoring data is via the NiFi REST API. Everything in the NiFi UI is done through REST calls, which you can also make programmatically. Please read the NiFi docs; they are linked directly from your running NiFi application and available on the web. They are very thorough and have all the information you could want: https://nifi.apache.org/docs/nifi-docs/. If you are not running NiFi 1.11.4, I recommend you upgrade. It is supported by Cloudera on multiple platforms.

NiFi REST API: https://nifi.apache.org/docs/nifi-docs/rest-api/. There is also an awesome Python wrapper for that REST API: https://pypi.org/project/nipyapi/

Also, in NiFi flow programming, every time you produce data to Kafka you get metadata back in FlowFile attributes. You can push those attributes directly to a Kafka topic if you want. So after your PublishKafkaRecord_2_0 processor, on the success relationship read the attributes (number of records and other data), then use AttributesToJSON and publish to another topic. You may want a MergeRecord in there to aggregate a few of those together.

If you are interested in Kafka metrics/record counts/monitoring, then you should use Cloudera Streams Messaging Manager. It provides a full web UI, monitoring tool, alerts, a REST API and everything you need for monitoring every producer, consumer, broker, cluster, topic, message, offset and Kafka component.

The best way to get NiFi stats is to use the NiFi Reporting Tasks; I like the SQL Reporting Task.
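As a rough sketch of polling the REST API from Python, you could hit the system-diagnostics endpoint and pull out a few JVM figures. The host/port and exact response field names below are assumptions; verify them against the REST API docs for your NiFi version:

```python
import json
from urllib.request import urlopen

def fetch_system_diagnostics(base_url):
    """Call NiFi's system-diagnostics endpoint and return the parsed JSON."""
    with urlopen(f"{base_url}/nifi-api/system-diagnostics") as resp:
        return json.load(resp)

def summarize_jvm(diagnostics):
    """Extract a few JVM fields from a system-diagnostics response
    (field names assumed from the NiFi 1.11.x REST API docs)."""
    snapshot = diagnostics["systemDiagnostics"]["aggregateSnapshot"]
    return {
        "heap_used": snapshot["usedHeap"],
        "heap_max": snapshot["maxHeap"],
        "threads": snapshot["totalThreads"],
    }

# Example (assumes an unsecured NiFi on localhost):
#   diag = fetch_system_diagnostics("http://localhost:8080")
#   print(summarize_jvm(diag))
```

For secured clusters, or anything beyond a quick poll, the nipyapi wrapper linked above handles authentication and pagination for you.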
SQL Reporting Tasks are very powerful and use standard SELECT * FROM JVM_METRICS style reporting; see my article: https://www.datainmotion.dev/2020/04/sql-reporting-task-for-cloudera-flow.html

Monitoring articles:
- https://www.datainmotion.dev/2019/04/monitoring-number-of-of-flow-files.html
- https://www.datainmotion.dev/2019/03/apache-nifi-operations-and-monitoring.html

Other resources:
- https://www.datainmotion.dev/2019/10/migrating-apache-flume-flows-to-apache_9.html
- https://www.datainmotion.dev/2019/08/using-cloudera-streams-messaging.html
- https://dev.to/tspannhw/apache-nifi-and-nifi-registry-administration-3c92
- https://dev.to/tspannhw/using-nifi-cli-to-restore-nifi-flows-from-backups-18p9
- https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html
- https://www.datainmotion.dev/p/links.html
- https://www.tutorialspoint.com/apache_nifi/apache_nifi_monitoring.htm
- https://community.cloudera.com/t5/Community-Articles/Building-a-Custom-Apache-NiFi-Operations-Dashboard-Part-1/ta-p/249060
- https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-metrics-reporting-nar/1.11.4/org.apache.nifi.metrics.reporting.task.MetricsReportingTask/
- https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-scripting-nar/1.11.4/org.apache.nifi.reporting.script.ScriptedReportingTask/index.html
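As a sketch of that SELECT * FROM JVM_METRICS style, a SQL Reporting Task query can also flag backed-up connections. The table and column names below are assumptions; check the reporting task documentation for your NiFi version:

```sql
-- Sketch: report connections with a growing queue
-- (table/column names assumed; verify against your NiFi version)
SELECT sourceName, destinationName, queuedCount, queuedBytes
FROM CONNECTION_STATUS
WHERE queuedCount > 1000
```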
05-17-2020
08:41 PM
Hi @kettle
As this thread was marked 'Solved' in June of 2016, you would have a better chance of receiving a useful response by starting a new thread. This will also give you the opportunity to provide details specific to your use of the PutSQL processor and/or Phoenix, which could help others provide a more tailored answer to your question.
05-15-2020
07:15 PM
Hello! If I insert a string containing a single quote (') or a double quote ("), PutSQL to Phoenix returns syntax errors. How should I solve this?
05-12-2020
11:47 AM
You should not use Flume. Flume and its connectors are deprecated. This flow, like any Flume flow, can easily move to NiFi. https://dev.to/tspannhw/migrating-apache-flume-flows-to-apache-nifi-jms-to-x-and-x-to-jms-1g02
05-12-2020
07:44 AM
Use the QueryRecord processor with a CSVReader and a JSON writer for output, and the query SELECT satellite_name FROM FLOWFILE. The next processor can then grab an attribute.
05-08-2020
12:03 PM
Awesome. Good luck with NiFi.
05-04-2020
10:53 AM
Ceph is not supported, and I don't know whether it really follows the S3 interface. Try the latest NiFi (1.11.4) and use basic S3 mode.
04-27-2020
05:22 PM
1 Kudo
In the use case solved for this webinar, I am a Streaming Engineer at an airline, CloudAir. I need to find, filter and clean Twitter streams then perform sentiment analysis.
Score Models in the Stream to Act
As the Streaming Engineer at CloudAir, I am responsible for ingesting data from thousands of sources, operationalizing machine learning models as part of our streams, running real-time ELT/ETL processes and building event processing systems running on devices, servers and edge nodes. For today's use case, one of our Machine Learning engineers has given me a model deployed into one of our production Cloudera Machine Learning (CML) environments. I logged into Cloudera Data Platform (CDP), found the model, tested it, and then extracted the information I needed to add this model to our streaming ingest flow for the social media team.
I have been given permissions to access the airline-sentiment workshop in CDP Public Cloud.
I can see all the models deployed in the project I have access to. I see that predict-sentiment is the one I am to use. It is deployed and has 8GB of RAM and 2 vCPU.
I can see that it has been running successfully for a while and I can test it right from the project.
You can see the URL used for the POST, and the accessKey is in the JSON body.
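As a sketch, a deployed CML model can be scored over HTTP using that URL and accessKey. The URL, key and input field below are placeholders; the real values come from your CML project's model page:

```python
import json
from urllib.request import Request, urlopen

def build_scoring_request(access_key, payload):
    """Build the JSON body for a CML model endpoint:
    the accessKey plus a 'request' object holding the model's inputs."""
    return {"accessKey": access_key, "request": payload}

def score(model_url, access_key, payload):
    """POST one record to a deployed CML model and return the parsed response."""
    body = json.dumps(build_scoring_request(access_key, payload)).encode("utf-8")
    req = Request(model_url, data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# Example (placeholder URL, key and input field):
#   result = score("https://modelservice.example.cloudera.site/model",
#                  "mp-abc123", {"sentence": "I love this airline"})
```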
Ingesting & Pre-processing Data from Twitter
Using Cloudera Flow Management (CFM) I am ingesting real-time Twitter streams which I filter for only airline specific data. I then clean and transform these records in a few simple steps. The next pieces I will need are those two critical values from CML: the Access Key and URL for the model. I will add them to an instance of an ExecuteClouderaML processor.
I am also sending the raw tweet (large JSON files) to a Kafka topic for further processing by other teams.
I also need to store this data to tables for ad-hoc queries. So I quickly spin up a virtual warehouse with Impala for reporting uses. I will put my data into S3 buckets as Parquet files, with an external Impala table on top, for these reports.
Defining the Impala Table for Supporting Queries
Once my environment is ready, which will only take a few minutes, I will launch Hue to create a table.
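A sketch of the kind of external table this could be; the table name, columns and bucket path below are hypothetical, so adjust them to your actual tweet schema and storage location:

```sql
-- Hypothetical external Impala table over Parquet files in S3
CREATE EXTERNAL TABLE IF NOT EXISTS tweets (
  id STRING,
  handle STRING,
  text STRING,
  sentiment STRING,
  followers_count BIGINT
)
STORED AS PARQUET
LOCATION 's3a://my-bucket/warehouse/tweets/';
```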
From the virtual warehouse I can grab the JDBC URL that I will need to add to my Impala Connection pool in CFM for connecting to the warehouse. I will also need the JDBC driver.
From CFM I add a JDBC Controller and copy in the URL, the Impala driver name and a link to that JDBC jar. I will also set my user and password, or Kerberos credentials, to connect.
Using the Scores
After having called CML from CFM, I can see the scoring results and can now use them to augment my Twitter data. The scores are added to the attributes of each event and do not affect the current flowfile content.
Now that data is streaming into Impala, I can run ad-hoc queries and build charts on my sentiment-enriched, cleaned-up Twitter data.
For those of you that love the command line, we can grab a link to the Impala command line tool for the virtual warehouse as well, and query from there. Good for quick checks.
Storing the Twitter Data in a Kudu Table
In another section of our flow we are also storing our enriched tweets in a CDP Data Center (CDP-DC) Kudu table for additional analytics that we are running in Hue and in a Jupyter notebook that we spin up with our CDP-DC CML.
Jupyter notebooks spun up from Cloudera Machine Learning let me explore my data and do some charting, graphs and SQL work in Python3.
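For instance, inside such a notebook one might pull the enriched tweets into pandas and tally sentiment counts for a quick chart. The column names and sample data below are hypothetical:

```python
import pandas as pd

def sentiment_counts(df):
    """Count tweets per sentiment label, e.g. to feed a bar chart."""
    return df["sentiment"].value_counts().to_dict()

# Hypothetical sample of enriched tweets
tweets = pd.DataFrame({
    "handle": ["@a", "@b", "@c"],
    "sentiment": ["POSITIVE", "NEGATIVE", "POSITIVE"],
})
print(sentiment_counts(tweets))  # {'POSITIVE': 2, 'NEGATIVE': 1}
```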
Assuring Data Governance and Lineage
One of the amazing features that comes in handy when you have a complex flow that spans a hybrid environment is to have data management and governance abilities. We can do that with Apache Atlas.
We can navigate and search through Atlas to see how data travels through Apache NiFi, Apache Kafka, tables and Cloudera Machine Learning model activities like deployment.
Final DataFlow For Scoring
We have a Query Record processor in CFM that analyzes the streaming events and looks for Negative sentiment by influencers, we then push those events to a Slack channel for our social media team to handle.
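A sketch of what that QueryRecord query could look like; the field names and follower threshold are assumptions about the enriched tweet schema:

```sql
-- Route negative tweets from high-follower accounts to the Slack alert path
SELECT *
FROM FLOWFILE
WHERE sentiment = 'NEGATIVE'
  AND followers_count > 10000
```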
As we have seen, we are sending several different streams of data to Kafka topics for further processing with Spark Streaming, Flink, NiFi, Java and Kafka Streams applications. Using Cloudera Streams Messaging Manager we can see all the components of our Kafka cluster and where our events are as they travel through topics on various brokers. You can see messages in all of the partitions, and you can also build alerts for any part of your Kafka system. An important piece is that you can trace messages from all of the consumers back to all of the producers and see any lag or latency that occurs in clients.
We can also push to our Operational Database (HBase) and easily scan through the rapidly inserted rows.
This demo was presented in the webinar, Harnessing the Data Lifecycle for Customer Experience Optimization.
Source Code Resources
- Queries, Python, Models, Notebooks
- Example Cloudera Machine Learning Connector
- SQL
04-16-2020
01:31 PM
How do I configure MergeRecord in order to merge multiple flowfiles whose contents are a|1, b|2, etc.? I need to merge them so each record lands on its own line, i.e. a|1 followed by b|2, whereas I currently get a|1b|2.