Member since: 06-26-2015
Posts: 505
Kudos Received: 127
Solutions: 114
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 649 | 09-20-2022 03:33 PM |
| | 2076 | 09-19-2022 04:47 PM |
| | 1225 | 09-11-2022 05:01 PM |
| | 1310 | 09-06-2022 02:23 PM |
| | 1890 | 09-06-2022 04:30 AM |
09-12-2023
11:14 PM
1 Kudo
We are pleased to announce the general availability of Cloudera Streaming Analytics (CSA) 1.11 on CDP Private Cloud Base 7.1.9. This release includes improvements to SQL Stream Builder (SSB) as well as updates to Flink 1.16.2. These changes are focused on enhancing the user experience and fixing bugs, making the product more robust and stable. Sincere thanks to all the individuals who helped with this release and did an incredible job to get this ready.

Key features for this release:

- Rebase to Apache Flink 1.16.2 - Apache Flink 1.16.2 is now supported in CSA 1.11.
- Apache Iceberg support - Support for Apache Iceberg tables using the Iceberg v2 format has been added to Flink and SSB. For more information, see the Creating Iceberg tables documentation.

Links:

- What's New in CSA 1.11
- Documentation
- Iceberg Tables
- Iceberg Connector
- REST API v2 Reference
- BLOG: Building a Stateful Intrusion Detection System with SSB
06-30-2023
02:31 AM
We are excited to announce the general availability of Cloudera Streaming Analytics (CSA) 1.10.0 on CDP Private Cloud Base. This release includes a massive set of improvements to SQL Stream Builder (SSB), including the addition of built-in widgets for data visualization, as well as a rebase to Flink 1.16.

Some of the key features of this release are:

- Rebase to Apache Flink 1.16 - Apache Flink 1.16 is now supported in CSA 1.10.
- PyFlink Support - The Python API for Flink is now supported in CSA. Customers can now create Flink DataStream applications using Python, besides Java and Scala, to build scalable batch and streaming workloads like real-time data processing pipelines, large-scale exploratory data analysis, Machine Learning (ML) pipelines, and ETL processes.
- Built-in Widgets for Data Visualization - Built-in data visualization widgets have been added to the SQL Stream Builder (SSB) UI to provide a quick and simple way to visualize data from streaming jobs and materialized views in real time.
- Built-in Support for Confluent Schema Registry - A new catalog type in SSB makes it easy to read and write data from Confluent Cloud clusters using their Schema Registry service.
- Flexible Schema Handling for Schema Registry catalogs - The Cloudera Schema Registry catalog can now handle separate schemas for message key and payload.

Useful Links:

- Documentation
- Release notes
- NEW BLOG: Building a Stateful Intrusion Detection System with SSB
- Cloudera Stream Processing (CSP) Community Edition - Try SSB for free!
03-09-2023
04:58 PM
The Cloudera Data in Motion (DiM) team is pleased to announce the general availability of Cloudera Streaming Analytics (CSA) 1.9.0 on CDP Private Cloud Base 7.1.7 SP2 and 7.1.8. This release includes a massive set of improvements to SQL Stream Builder (SSB) as well as updates to Flink 1.15.1. These changes are focused on enhancing the user experience and removing objections and blockers in the sales cycle. All the features described below are already available in the Cloudera Stream Processing - Community Edition release, which is the fastest way for you to try them out for free.

Links:

- Documentation
- Release notes
- CSP Community Edition Download and Install
- Blog - A UI That Makes You Want To Stream
- Blog - SQL Stream Builder Data Transformations
- Blog - Job Notifications in SQL Stream Builder

Key features for this release:

- Reworked Streaming SQL Console: The user interface (UI) of SQL Stream Builder (SSB), the Streaming SQL Console, has been reworked with new design elements.
- Software Development Lifecycle (SDLC) support (Tech Preview): Projects are introduced as an organizational element for SQL Stream Builder that allows you to create and collaborate on SQL jobs throughout the SDLC stages with source control. For more information, see the Project structure and development documentation.
- Confluent Schema Registry support: Confluent Schema Registry can be used as a catalog in SQL Stream Builder and Flink. This unblocks the onboarding of customers that are using Confluent Kafka with Confluent Schema Registry.
- Improved REST API for SSB: Several new endpoints have been added to the API, making it easier to automate deployments to SSB and to integrate it with other applications.
- Updated CSP Community Edition: The Community Edition has been refreshed to include all these features, including the revamped UI and SSB Projects, and offers the fastest way for you to try out these new features.
- And, as usual: bug fixes, security patches, performance improvements, etc.
10-09-2022
04:01 PM
@Althotta , I tested this on 1.16.2 and the behaviour you described doesn't happen to me. Would you be able to share your flow and your processor/controller service configurations? Cheers, André
09-28-2022
03:04 AM
1 Kudo
You can get the id of the root process group and import the template there as well. André
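A minimal Python sketch of retrieving the root process group id via the NiFi REST API. The host and port are assumptions (an unsecured NiFi at localhost:8080); NiFi also accepts the literal id `root` as an alias in the process-groups endpoint:

```python
# Hypothetical base URL; adjust host/port for your NiFi instance.
NIFI_API = "http://localhost:8080/nifi-api"

def root_pg_url(base=NIFI_API):
    # NiFi accepts the literal id "root" as an alias for the root
    # process group in GET /process-groups/{id}.
    return f"{base}/process-groups/root"

def extract_pg_id(entity):
    # Pull the id out of the ProcessGroupEntity JSON payload.
    return entity["component"]["id"]

# Example against a live, unsecured NiFi (left commented out):
# import json
# from urllib.request import urlopen
# with urlopen(root_pg_url()) as resp:
#     root_id = extract_pg_id(json.load(resp))
```

The id returned this way can then be used in the template upload endpoint mentioned elsewhere in this thread.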
09-28-2022
12:23 AM
1 Kudo
@Kushisabishii , Which version of NiFi are you using? There's an API endpoint for this: POST /process-groups/{id}/templates/upload Cheers, André
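A rough Python sketch of calling that endpoint, using only the standard library. The host/port and the multipart field name `template` are assumptions; verify them against the REST API docs for your NiFi version:

```python
import uuid

# Hypothetical base URL; adjust host/port for your NiFi instance.
NIFI_API = "http://localhost:8080/nifi-api"

def template_upload_url(pg_id, base=NIFI_API):
    # The endpoint mentioned above: POST /process-groups/{id}/templates/upload
    return f"{base}/process-groups/{pg_id}/templates/upload"

def multipart_body(template_xml, filename="template.xml"):
    # Build a multipart/form-data body carrying the template XML in a
    # field named "template" (field name is an assumption).
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="template"; filename="{filename}"\r\n'
        "Content-Type: application/xml\r\n\r\n"
        f"{template_xml}\r\n"
        f"--{boundary}--\r\n"
    ).encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return body, content_type

# Example against a live, unsecured NiFi (left commented out):
# from urllib.request import Request, urlopen
# body, ctype = multipart_body(open("my_template.xml").read())
# req = Request(template_upload_url("root"), data=body,
#               headers={"Content-Type": ctype}, method="POST")
# urlopen(req)
```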
09-28-2022
12:18 AM
Can you share your settings?
09-21-2022
03:24 PM
Is your dev cluster running the exact same version of NiFi as production, including the NiFi lib folder?
09-20-2022
03:33 PM
1 Kudo
@progowl , Yes, it is. Check out the docker compose configuration in this article: https://community.cloudera.com/t5/Community-Articles/NiFi-cluster-sandbox-on-Docker/ta-p/346271 Cheers, André
09-19-2022
04:47 PM
@SAMSAL @ChuckE , I believe parsing the schema for each flowfile that goes through the processor would be too expensive. Because of that, the schema is parsed only once, when the processor is scheduled, and reused for every flowfile. That's why attribute values cannot be used for this property.

Having an internal schema hashmap<filename, parsed_schema> could be an interesting idea, so that the processor would parse each schema onTrigger only once per schema file name and reuse it afterwards. Obviously, memory usage could be a problem if you have too many schemas, but I don't think that is likely to happen. This doesn't exist currently, but it would be a nice feature request, IMO.

For now, you can either do that with a scripting processor or use RouteOnAttribute to send each message to a ValidateXML processor with the correct schema. Cheers, André
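The caching idea described above can be sketched in plain Python (this is an illustration of the hashmap<filename, parsed_schema> pattern, not actual NiFi processor code; `parse_schema` is a hypothetical stand-in for the real XML schema parsing):

```python
# Track how often the (stand-in) parser runs, for illustration only.
parse_calls = []

def parse_schema(filename):
    # Stand-in for the expensive schema-parsing step the processor
    # would normally perform on every flowfile.
    parse_calls.append(filename)
    return {"source": filename}  # placeholder for a parsed schema object

# The proposed cache: filename -> parsed schema.
_schema_cache = {}

def get_schema(filename):
    # Parse only on first use; subsequent lookups hit the cache, so the
    # cost is paid once per schema file rather than once per flowfile.
    if filename not in _schema_cache:
        _schema_cache[filename] = parse_schema(filename)
    return _schema_cache[filename]
```

With this in place, two lookups for the same filename return the same parsed object, and the parser runs only once; the trade-off, as noted above, is that every distinct schema stays in memory.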