Member since: 06-26-2015
Posts: 513
Kudos Received: 137
Solutions: 114

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1566 | 09-20-2022 03:33 PM
 | 4437 | 09-19-2022 04:47 PM
 | 2538 | 09-11-2022 05:01 PM
 | 2699 | 09-06-2022 02:23 PM
 | 4174 | 09-06-2022 04:30 AM
03-09-2023
04:58 PM
The Cloudera Data in Motion (DiM) team is pleased to announce the general availability of Cloudera Streaming Analytics (CSA) 1.9.0 on CDP Private Cloud Base 7.1.7 SP2 and 7.1.8. This release includes a massive set of improvements to SQL Stream Builder (SSB) as well as an update to Flink 1.15.1. These changes are focused on enhancing the user experience and removing objections and blockers in the sales cycle. All the features described below are already available in the Cloudera Stream Processing - Community Edition release, which is the fastest way for you to try them out for free.

Links:
- Documentation
- Release notes
- CSP Community Edition Download and Install
- Blog - A UI That Makes You Want To Stream
- Blog - SQL Stream Builder Data Transformations
- Blog - Job Notifications in SQL Stream Builder

Key features for this release:
- Reworked Streaming SQL Console: The user interface (UI) of SQL Stream Builder (SSB), the Streaming SQL Console, has been reworked with new design elements.
- Software Development Lifecycle (SDLC) support (Tech Preview): Projects are introduced as an organizational element for SQL Stream Builder that allows you to create and collaborate on SQL jobs throughout the SDLC stages with source control. For more information, see the Project structure and development documentation.
- Confluent Schema Registry support: Confluent Schema Registry can be used as a catalog in SQL Stream Builder and Flink. This unblocks the onboarding of customers that are using Confluent Kafka with Confluent Schema Registry.
- Improved REST API for SSB: Several new endpoints have been added to the API, making it easier to automate deployments to SSB and to integrate it with other applications.
- Updated CSP Community Edition: The Community Edition has been refreshed to include all of these features, including the revamped UI and SSB Projects, and offers the fastest way for you to try them out.

And, as usual: bug fixes, security patches, performance improvements, etc.
10-09-2022
04:01 PM
@Althotta , I tested this on 1.16.2 and the behaviour you described doesn't happen to me. Would you be able to share your flow and processor/controller services configuration? Cheers, André
09-28-2022
03:04 AM
1 Kudo
You can get the id of the root process group and import the template there as well. André
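A minimal sketch of fetching that id with the NiFi REST API, assuming an unsecured NiFi at http://localhost:8080/nifi-api (the URL is a placeholder) and the Python requests library:

```python
import requests

NIFI_API = "http://localhost:8080/nifi-api"  # placeholder: adjust to your instance

# The special "root" alias returns the root process group entity;
# its id can then be used as the target for the template import.
root = requests.get(f"{NIFI_API}/process-groups/root").json()
root_pg_id = root["id"]
print("Root process group id:", root_pg_id)
```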
09-28-2022
12:23 AM
1 Kudo
@Kushisabishii , Which version of NiFi are you using? There's an API endpoint for this: POST /process-groups/{id}/templates/upload Cheers, André
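A hedged sketch of calling that endpoint with Python requests, assuming an unsecured NiFi at http://localhost:8080/nifi-api and a local template file named my_template.xml (both placeholders), and assuming the multipart form field is named "template":

```python
import requests

NIFI_API = "http://localhost:8080/nifi-api"    # placeholder: adjust to your instance
process_group_id = "your-process-group-id"     # placeholder: e.g. the root process group id

# Upload the template XML into the given process group.
with open("my_template.xml", "rb") as f:       # placeholder file name
    resp = requests.post(
        f"{NIFI_API}/process-groups/{process_group_id}/templates/upload",
        files={"template": ("my_template.xml", f, "application/xml")},
    )
resp.raise_for_status()
print(resp.status_code)
```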
09-28-2022
12:18 AM
Can you share your settings?
09-21-2022
03:24 PM
Is your dev cluster running the exact same version of NiFi as production, including the NiFi lib folder?
09-20-2022
03:33 PM
1 Kudo
@progowl , Yes, it is. Check out the docker compose configuration in this article: https://community.cloudera.com/t5/Community-Articles/NiFi-cluster-sandbox-on-Docker/ta-p/346271 Cheers, André
09-19-2022
04:47 PM
@SAMSAL @ChuckE , I believe parsing the schema for each flowfile that goes through the processor would be too expensive. Because of that, the schema is parsed only once when the processor is scheduled and then used for every flowfile. That's why attribute values cannot be used for this property.

Having a schema hashmap<filename, parsed_schema> internally could be an interesting idea, so that the processor would parse each schema only once onTrigger, keyed by schema file name, and reuse it afterwards. Memory usage could become a problem if you have too many schemas, but I don't think that is likely to happen. This isn't implemented currently, but it would be a nice feature request IMO.

For now, you can either do that with a scripting processor or use RouteOnAttribute to send each message to a ValidateXML processor configured with the correct schema. Cheers, André
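To illustrate the hashmap idea outside of NiFi, here is a minimal Python sketch (not NiFi code) that caches parsed XML schemas by file name, so each schema is parsed once and reused for every subsequent validation; it uses lxml, and the file names are placeholders:

```python
from lxml import etree

# Cache of parsed schemas, keyed by schema file name, so each file is parsed only once.
_schema_cache = {}

def get_schema(schema_file):
    if schema_file not in _schema_cache:
        _schema_cache[schema_file] = etree.XMLSchema(etree.parse(schema_file))
    return _schema_cache[schema_file]

def validate(xml_file, schema_file):
    # Reuses the cached, already-parsed schema if this file name was seen before.
    return get_schema(schema_file).validate(etree.parse(xml_file))

# Example usage (placeholder file names):
# print(validate("message.xml", "order_schema.xsd"))
```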
09-11-2022
05:01 PM
1 Kudo
@sekhar1 , The CDP user that you're using to execute your job needs an "IDBroker mapping" to a valid AWS role to be able to access the contents of the S3 bucket. Please check this: https://docs.cloudera.com/cdf-datahub/7.2.10/nifi-hive-ingest/topics/cdf-datahub-hive-ingest-idbroker-mapping.html Cheers, André
09-07-2022
05:04 AM
1 Kudo
Everything you do in the NiFi UI can also be done using the NiFi REST API. So if you want/need to automate it, it's totally possible and not difficult. Cheers, André
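As a hedged illustration of that kind of automation with Python requests, here is a sketch that starts a processor the same way the UI's "Start" action does, assuming an unsecured NiFi at http://localhost:8080/nifi-api and a placeholder processor id:

```python
import requests

NIFI_API = "http://localhost:8080/nifi-api"  # placeholder: adjust to your instance
processor_id = "your-processor-uuid"         # placeholder

# Read the processor first to get its current revision, which NiFi
# requires for any state-changing request.
proc = requests.get(f"{NIFI_API}/processors/{processor_id}").json()

# Start the processor via its run-status endpoint.
resp = requests.put(
    f"{NIFI_API}/processors/{processor_id}/run-status",
    json={"revision": proc["revision"], "state": "RUNNING"},
)
resp.raise_for_status()
```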