Member since: 04-11-2016
Posts: 469
Kudos Received: 320
Solutions: 118
04-09-2024
03:06 AM
1 Kudo
GetHTTP itself doesn't handle OAuth2 directly. Here's a breakdown of the process:
1. Obtaining Access Token:
You'll need to acquire an access token before making API calls to Salesforce.
This typically involves a two-step process:
Step 1: Authorization Code Grant:
Direct your user to a Salesforce authorization URL with your client ID and redirect URI.
Upon successful login and authorization, Salesforce redirects the user back to your redirect URI with an authorization code.
Step 2: Token Request:
Use the authorization code retrieved in step 1 to make a POST request to Salesforce's token endpoint.
Include your client ID, client secret, redirect URI, and grant type ("authorization_code") in the request body.
If successful, Salesforce will respond with an access token and other relevant information (refresh token, expiration time).
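For illustration, here is a minimal sketch of the two steps above using Python's requests library. The client ID, client secret, and redirect URI are placeholders for your own connected app values; the rest follows the standard Salesforce OAuth2 web server flow.
import urllib.parse
import requests

AUTH_URL = "https://login.salesforce.com/services/oauth2/authorize"
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"
CLIENT_ID = "YOUR_CLIENT_ID"                 # placeholder: connected app consumer key
CLIENT_SECRET = "YOUR_CLIENT_SECRET"         # placeholder: connected app consumer secret
REDIRECT_URI = "https://your.app/callback"   # placeholder: must match the registered callback

def build_authorization_url() -> str:
    """Step 1: URL to send the user to for login and authorization."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}"

def exchange_code_for_token(auth_code: str) -> dict:
    """Step 2: exchange the authorization code for an access token."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Response JSON includes access_token, refresh_token, instance_url, etc.
    return response.json()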
2. Using Access Token with GetHTTP:
Once you have the access token, you can use GetHTTP to make API calls to Salesforce.
Set the following header in your GetHTTP request:
Authorization: Bearer <access_token> (Replace <access_token> with your actual token)
Configure the request URL with the desired Salesforce API endpoint and any necessary parameters.
Execute the GetHTTP request to retrieve data or perform actions on the Salesforce platform.
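To make that configuration concrete, this is roughly the raw HTTP call the processor ends up issuing, sketched here with Python's requests library; the instance URL, API version, and SOQL query are placeholder examples.
import requests

def query_salesforce(instance_url: str, access_token: str) -> dict:
    """Call the Salesforce REST API with the bearer token obtained earlier."""
    response = requests.get(
        f"{instance_url}/services/data/v59.0/query",  # API version is just an example
        headers={"Authorization": f"Bearer {access_token}"},
        params={"q": "SELECT Id, Name FROM Account LIMIT 10"},  # placeholder SOQL query
        timeout=30,
    )
    response.raise_for_status()
    return response.json()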
Important Considerations:
Security: Store access tokens securely and avoid exposing them in code or logs.
Token Refresh: Access tokens expire, so implement a mechanism to refresh them before expiration using the refresh token obtained during the initial authorization flow.
Libraries: Consider using libraries designed for Salesforce integrations, which can simplify the OAuth2 process and provide additional functionalities.
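On the token refresh point above, the same Salesforce token endpoint accepts a refresh_token grant; here is a small sketch, again with placeholder client credentials.
import requests

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def refresh_access_token(refresh_token: str) -> dict:
    """Trade the long-lived refresh token for a new access token."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": "YOUR_CLIENT_ID",          # placeholder
            "client_secret": "YOUR_CLIENT_SECRET",  # placeholder
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # contains a fresh access_token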
03-05-2024
07:13 PM
1 Kudo
@SAMSAL are you still on M1 or M2? I'm on M1 and took what you shared and just made those minor tweaks. I'll test it out more tomorrow... maybe I'll restart NiFi just to clear memory and any funny stuff that might be there.
05-25-2023
12:59 AM
Everybody is entitled to an opinion, but may I ask why you are saying this? 🙂 As you are on a NiFi post, I assume that you are referring to the Cloudera NiFi documentation? I find it very helpful, especially combined with NiFi's original documentation. It even has some additional things compared to the original documentation. No matter the feedback, positive or negative, it is good when you know what to do with it. In your case, if you provided better and more structured feedback, maybe somebody from Cloudera would understand your point of view and could modify the documentation 🙂
04-27-2023
10:03 AM
Also, I had one with a Kudu cache for calling DailyMed: https://github.com/tspannhw/ApacheConAtHome2020/tree/main/flows/DailyMed and https://www.datainmotion.dev/2021/01/flank-using-apache-kudu-as-cache-for.html
04-07-2023
04:15 PM
@hassenseoud The log shows:
Triggering log roll on remote NameNode hdpmaster2/192.168.1.162:8020
2016-10-24 11:25:52,108 WARN ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(276)) - Unable to trigger a roll of the active NN
org.apache.hadoop.ipc.RemoteException (org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
Action 1: Check which NameNode is active (the service IDs come from dfs.ha.namenodes.mycluster in hdfs-site.xml):
$ hdfs haadmin -getServiceState <serviceId>
Example:
$ hdfs haadmin -getServiceState namenode2
active
$ hdfs haadmin -getServiceState namenode1
standby
Action 2: Shut down whichever of the above was/is the standby from Ambari and ensure it is stopped.
Action 3: From Ambari, do a rolling restart of the ZooKeeper quorum and wait until all 3 (or however many you have) are restarted.
Action 4: Execute the commands below sequentially:
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -saveNamespace
$ hdfs dfsadmin -safemode leave
Restart the JournalNodes and the NameNode service above. When everything is green, you can safely start the standby.
01-12-2023
09:49 AM
Apache Flink Upgrade
Deployments can now upgrade from Flink 1.14 to 1.15.1. This update includes 40 bug fixes and a number of other enhancements. To learn more about what has been fixed, check out the release notes.
SQL Stream Builder UI
The Streaming SQL Console (UI) of SQL Stream Builder has been completely reworked with new design elements. The new design gives users easier access to artifacts that are commonly used or already created as part of a project, simplifying navigation and saving the user time.
Software Development Lifecycle support with Projects
Projects for SQL Stream Builder improve upon the software development lifecycle needs of developers and analysts writing applications, allowing them to group related artifacts together and sync them to GitHub for versioning and CI/CD management. Until now, when users created SQL jobs, functions, and other artifacts, there was no effective way to migrate them to another environment for use (i.e. from dev to prod). The typical way artifacts were migrated was by copying and pasting code between environments; this lived outside of code repositories and the typical CI/CD process many companies use, took additional hands-on-keyboard time, and allowed errors to be introduced during copying and when updating environment-specific configurations. These issues are solved with SQL Stream Builder Projects. Users simply create a new project, give it a name, and link a GitHub repository to it as part of creation; from that point onward, any artifacts created in the project can be pushed to the GitHub repository with the click of a button. For environment-specific needs, a parameterized key-value configuration can be used to avoid editing configurations that change between deployments, by referencing generic properties that are set differently in each environment.
Job Notifications
Job notifications help you detect failed jobs without checking the UI, which can save the user a lot of time. This feature is very useful, especially when the user has numerous jobs running and keeping track of their state would be hard without notifications. Notifications can be sent to a single user or a group of users, either over email or by using a webhook.
Summary
In this post, we looked at some of the new features that came out in CDP Public Cloud 7.2.16. This includes Flink 1.15.1, which comes with many bug fixes, a brand new UI for SQL Stream Builder, the ability to monitor jobs for failures and send notifications, and new Software Development Lifecycle capabilities with Projects. For more details, read the latest release notes. Give Cloudera Streaming Analytics 7.2.16 for Datahub a try today and check out all the great new features added!
12-14-2022
08:22 AM
1 Kudo
You can find the release notes and the download links in the documentation.
Key features for this release:
Rebase against NiFi 1.18, bringing the latest and greatest of Apache NiFi. It contains a ton of improvements and new features.
Reset of the end-of-life policy: CFM 2.1.5 will be supported until August 2025 to match the CDP 7.1.7 LTS policy. This is particularly important as HDF and CFM 1.x are nearing end of life.
Parameter Providers: we are introducing the concept of Parameter Providers, allowing users to fetch the values of parameters from external locations. In addition to a better separation of duties, it is also very useful for making CI/CD better and easier. With this release, we're supporting the following Parameter Providers:
- AWS Secrets Manager
- GCP Secret Manager
- HashiCorp Vault
- Database
- Environment Variables
- External file
Registry Client to connect to a DataFlow Catalog: the registry endpoint is now an extension point in NiFi, which means it is no longer limited to accessing a NiFi Registry instance. With this release we're adding an implementation allowing users to connect NiFi to their DataFlow Catalog and use it just like they would NiFi Registry. For hybrid customers, it means they can easily check out and version flow definitions in the same place for both on-prem and cloud usage. It also means that on-prem customers can access the ReadyFlows gallery, assuming they have a public cloud tenant.
Iceberg processor (Tech Preview): we're making a PutIceberg processor available in Technical Preview, allowing users to push data into Iceberg using NiFi. This can be used in both batch and streaming (micro-batch) fashion.
Snowflake ingest with Snowpipe (Tech Preview): until now, only JDBC could be used to push data into Snowflake with NiFi. We're now making available a set of processors leveraging Snowpipe to push data into Snowflake in a more efficient way.
New components: we are adding a bunch of new components...
- ConsumeTwitter
- Processors to interact with Box, Dropbox, Google Drive, SMB
- Processors to interact with HubSpot, Shopify, Zendesk, Workday, Airtable
- PutBigQuery (leveraging the new API)
- ListenBeats is now Cloudera supported
- UpdateDatabaseTable to manage updates to a table's schema (adding columns, for example)
- AzureEventHubRecordSink & UDPEventRecordSink
- CiscoEmblemSyslogMessageReader to make it easy to ingest logs from Cisco systems such as ASA VPNs
- ConfluentSchemaRegistry is now Cloudera supported
- Iceberg and Snowflake components as mentioned before
Replay last event: with this release we add the possibility to replay the last event at the processor level (right-click on the processor, Replay last event). This makes it super easy to replay the last flow file, instead of going to the provenance events, taking the last event, and clicking replay. This is very useful when developing flows!
And, as usual, bug fixes, security patches, performance improvements, etc.
06-05-2022
06:38 PM
Excellent solution, works perfectly, and the best part is that you can copy and paste it into any flow. Thanks! @ozw1z5rd1
09-16-2021
02:40 AM
1 Kudo
With the release of CDP 7.2.11, it is now super easy to deploy your custom components on your Flow Management DataHub clusters by dropping them in a bucket of your cloud provider. Until now, when building custom components for NiFi, you had to SSH to all of your NiFi nodes to deploy the components and make them available for use in your flow definitions. This added operational overhead and also caused issues when scaling up clusters. From now on, it's easy to configure your NiFi clusters to automatically fetch custom components from an external location in the object store of the cloud provider where NiFi is running. All of your nodes will fetch the components after you drop them in the configured location. You can find more information in the documentation.
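As a rough illustration of that workflow on AWS, a custom NAR could be dropped into the watched location with boto3; the bucket name and prefix below are placeholders for whatever object-store path your cluster is configured to poll.
import boto3

def upload_custom_nar(nar_path: str) -> None:
    """Upload a custom NiFi NAR to the object-store location the cluster fetches components from."""
    s3 = boto3.client("s3")
    s3.upload_file(
        Filename=nar_path,
        Bucket="my-flow-management-bucket",                 # placeholder bucket
        Key="custom-nars/" + nar_path.rsplit("/", 1)[-1],   # placeholder prefix
    )

upload_custom_nar("target/my-custom-processors-1.0.nar")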
09-16-2021
02:31 AM
1 Kudo
With the release of CDP 7.2.11, you can now scale both your light duty and heavy duty Flow Management clusters up and down on all cloud providers. You can find more information in the documentation.