Member since
03-16-2016
707
Posts
1753
Kudos Received
203
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4997 | 09-21-2018 09:54 PM |
|  | 6329 | 03-31-2018 03:59 AM |
|  | 1932 | 03-31-2018 03:55 AM |
|  | 2141 | 03-31-2018 03:31 AM |
|  | 4731 | 03-27-2018 03:46 PM |
04-09-2024
03:06 AM
1 Kudo
GetHTTP itself doesn't handle OAuth2 directly. Here's a breakdown of the process:
1. Obtaining Access Token:
You'll need to acquire an access token before making API calls to Salesforce.
This typically involves a two-step process:
Step 1: Authorization Code Grant:
Direct your user to a Salesforce authorization URL with your client ID and redirect URI.
Upon successful login and authorization, Salesforce redirects the user back to your redirect URI with an authorization code.
Step 2: Token Request:
Use the authorization code retrieved in step 1 to make a POST request to Salesforce's token endpoint.
Include your client ID, client secret, redirect URI, and grant type ("authorization_code") in the request body.
If successful, Salesforce will respond with an access token and other relevant information (refresh token, expiration time).
2. Using Access Token with GetHTTP:
Once you have the access token, you can use GetHTTP to make API calls to Salesforce.
Set the following headers in your GetHTTP request:
Authorization: Bearer <access_token> (Replace <access_token> with your actual token)
Configure the request URL with the desired Salesforce API endpoint and any necessary parameters.
Execute the GetHTTP request to retrieve data or perform actions on the Salesforce platform (a rough sketch of both raw calls is included at the end of this post).
Important Considerations:
Security: Store access tokens securely and avoid exposing them in code or logs.
Token Refresh: Access tokens expire, so implement a mechanism to refresh them before expiration using the refresh token obtained during the initial authorization flow.
Libraries: Consider using libraries designed for Salesforce integrations, which can simplify the OAuth2 process and provide additional functionalities.
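To make this concrete, here is a rough curl sketch of the two raw HTTP calls. The login.salesforce.com endpoints are Salesforce's standard OAuth2 endpoints; the client ID/secret, redirect URI, authorization code, instance host, and API version below are placeholders you would replace with your own values:
# Step 2: exchange the authorization code for an access token
curl -X POST "https://login.salesforce.com/services/oauth2/token" \
  -d "grant_type=authorization_code" \
  -d "code=<authorization_code_from_step_1>" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  -d "redirect_uri=<redirect_uri>"
# Use the returned access token on the actual API call
# (this is the same Authorization header and URL you would configure on GetHTTP)
curl -H "Authorization: Bearer <access_token>" \
  "https://<instance_url_from_token_response>/services/data/v58.0/query?q=SELECT+Id+FROM+Account"
The JSON response from the token call includes the access_token, the instance_url to use for subsequent calls, and (depending on your connected app's scopes) a refresh_token for the refresh flow mentioned above.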
... View more
05-09-2022
06:21 AM
Similar to this, I have a use case to compare our Ansible code with the Ambari configs. The reason we are doing this is that we found several inconsistencies between the Ansible code and the Ambari configs. But comparing both manually is a big task, as there are many playbooks containing Hadoop configuration, so checking the whole code base is a headache. Is there any other option to do the comparison?
... View more
12-21-2021
08:26 AM
Impala doesn't support ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde', even in newer versions such as 3.4.0. Is there any other option to remove double quotes from the output in Impala when the input CSV file contains quoted values?
... View more
05-30-2021
01:17 AM
[hdfs@c****-node* hive-testbench-hive14]$ ./tpcds-build.sh
Building TPC-DS Data Generator
make: Nothing to be done for `all'.
TPC-DS Data Generator built, you can now use tpcds-setup.sh to generate data.
[hdfs@c4237-node2 hive-testbench-hive14]$ ./tpcds-setup.sh 2
TPC-DS text data generation complete.
Loading text data into external tables.
make: *** [time_dim] Error 1
make: *** Waiting for unfinished jobs....
make: *** [date_dim] Error 1
Data loaded into database tpcds_bin_partitioned_orc_2.
INFO : OK
+---------------------+
| database_name       |
+---------------------+
| default             |
| information_schema  |
| sys                 |
+---------------------+
3 rows selected (1.955 seconds)
0: jdbc:hive2://c4237-node2.coelab.cloudera.c>
The tpcds_bin_partitioned_orc_2 database is not created, and I have some issues testing the TPC-DS queries. The steps I followed were:
sudo -u hdfs -s
cd /home/hdfs
wget https://github.com/hortonworks/hive-testbench/archive/hive14.zip
unzip hive14.zip
export JAVA_HOME=/usr/jdk64/jdk1.8.0_77
export PATH=$JAVA_HOME/bin:$PATH
./tpcds-build.sh
beeline -i testbench.settings -u "jdbc:hive2://c****-node9.coe***.*****.com:10500/tpcds_bin_partitioned_orc_2"
I'm not able to test the TPC-DS queries; any help would be appreciated.
... View more
01-26-2021
08:51 AM
https://nifi.apache.org/docs/nifi-docs/components/nifi-docs/components/org.apache.nifi/nifi-gcp-nar/1.9.0/org.apache.nifi.processors.gcp.bigquery.PutBigQueryBatch/index.html With this processor, you can batch load flow file content into a Google BigQuery table.
... View more
11-11-2020
01:20 AM
You can try this expression:
${message:unescapeXml()}
This function unescapes a string containing XML entity escapes to a string containing the actual Unicode characters corresponding to the escapes.
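For example, assuming the flowfile has an attribute named message (the attribute name is just illustrative) containing XML entity escapes, the expression turns the escaped text back into the real characters:
message value:            &lt;note&gt;Tom &amp; Jerry&lt;/note&gt;
${message:unescapeXml()}  returns:  <note>Tom & Jerry</note>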
... View more
07-22-2020
05:45 AM
Unfortunately, the given solution is not correct. In HDFS, a block that is open for write does consume 128 MB, but as soon as the file is closed, the last block of the file is accounted for only by the actual length of the file. So a 1 KB file consumes 3 KB of disk space with replication factor 3, and a 129 MB file consumes 387 MB of disk space, again with replication factor 3. The phenomenon seen in the output was most likely caused by other non-DFS disk usage, which reduced the disk space available to HDFS and had nothing to do with the file sizes. Just to demonstrate this with a 1 KB test file:
# hdfs dfs -df -h
Filesystem         Size    Used    Available  Use%
hdfs://<nn>:8020   27.1 T  120 K   27.1 T     0%
# fallocate -l 1024 test.txt
# hdfs dfs -put test.txt /tmp
# hdfs dfs -df -h
Filesystem         Size    Used     Available  Use%
hdfs://<nn>:8020   27.1 T  123.0 K  27.1 T     0%
Used grows by only 3 KB (the 1 KB file times replication factor 3), not by a whole 128 MB block. I hope this helps to clarify and correct this answer.
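As a side note, a quick way to check this per file is hdfs dfs -du (a sketch assuming a reasonably recent Hadoop release, where -du prints both the raw file length and the space consumed across all replicas; exact formatting varies by version):
# hdfs dfs -du -h /tmp/test.txt
1.0 K  3.0 K  /tmp/test.txt
The first column is the file length and the second is the disk space consumed including replication (1 KB x 3 here).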
... View more
07-13-2020
12:54 AM
1 Kudo
Hello @VidyaSargur, thanks for your answer. You are totally right. I only realized that this is an older thread after I had already posted. Therefore I already created a new thread (https://community.cloudera.com/t5/Support-Questions/Permanently-store-sqoop-map-column-hive-mapping-for-DB2/td-p/299556). regards
... View more
06-26-2020
08:27 AM
@Kapardjh, As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question.
... View more
05-21-2020
10:59 PM
@GreenMonster, the thread you've posted your question to was marked 'Solved' quite a while ago and hasn't had any activity recently. While we welcome your question and a member of the Cloudera Community might well be able to answer it, I believe you would be much more likely to obtain a timely solution if you posted it to the Ubuntu Community.
... View more