Member since: 01-06-2016
Posts: 36
Kudos Received: 104
Solutions: 6
My Accepted Solutions
Views | Posted
---|---
1652 | 03-16-2017 08:21 AM
14056 | 03-15-2017 10:05 AM
5704 | 03-15-2017 08:18 AM
2961 | 09-26-2016 09:22 AM
3197 | 08-18-2016 05:00 AM
06-25-2018
09:48 PM
2 Kudos
OBJECTIVE:
Verify the Beacon SSO setup is configured correctly.
OVERVIEW:
After you enable Knox SSO for the Beacon service as described in the documentation, it is good practice to verify that the Beacon Knox SSO setup is configured correctly before adding the clusters to DataPlane.
STEPS:
STEP 1:
From the Ambari configs, get the beacon.sso.knox.providerurl property and execute the following curl command:
Syntax:
curl -iku <knox_username>:<knox_password> "<beacon.sso.knox.providerurl>?originalUrl=http://<beacon_server>:25968/api/beacon/cluster/list"
Example:
Command:
curl -iku $username:$password "https://hostname.hwx.site:8443/gateway/knoxsso/api/v1/websso?originalUrl=http://hostname.hwx.site:25968/api/beacon/cluster/list"
Output:
HTTP/1.1 307 Temporary Redirect
Date: Thu, 14 Jun 2018 21:26:27 GMT
X-Frame-Options: DENY
Set-Cookie: JSESSIONID=1abzrcp2xl7sm1k4jgtevkwv6x;Path=/gateway/knoxsso;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/knoxsso; Max-Age=0; Expires=Wed, 13-Jun-2018 21:26:27 GMT
Set-Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc;Path=/;Domain=.hwx.site;HttpOnly
Location: http://hostname.hwx.site:25968/api/beacon/cluster/list
Content-Length: 0
Server: Jetty(9.2.15.v20160210)
Note the hadoop-jwt cookie value from the output.
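If you want to capture the cookie programmatically instead of copying it by hand, you can grep it out of the response headers. A minimal sketch, assuming the same KnoxSSO URL and credentials as the command above:

# Sketch: extract the hadoop-jwt cookie value from the KnoxSSO response
# headers into a shell variable. URL and credentials match the example above.
JWT=$(curl -sik -u "$username:$password" \
  "https://hostname.hwx.site:8443/gateway/knoxsso/api/v1/websso?originalUrl=http://hostname.hwx.site:25968/api/beacon/cluster/list" \
  | grep -o 'hadoop-jwt=[^;]*' | head -1 | sed 's/^hadoop-jwt=//')
echo "$JWT"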
STEP 2:
Using the hadoop-jwt value, call the Beacon API directly:
Syntax:
curl -ivL -u : --cookie "hadoop-jwt=<hadoop-jwt>" http://<beacon_server>:25968/api/beacon/cluster/list
Command:
curl -ivL -u : --cookie "hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc" http://hostname.hwx.site:25968/api/beacon/cluster/list
Output:
* About to connect() to hostname.hwx.site port 25968 (#0)
* Trying 172.27.54.132...
* Connected to hostname.hwx.site (172.27.54.132) port 25968 (#0)
* Server auth using Basic with user ''
> GET /api/beacon/cluster/list HTTP/1.1
> Authorization: Basic Og==
> User-Agent: curl/7.29.0
> Host: hostname.hwx.site:25968
> Accept: */*
> Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Expires: Thu, 01-Jan-1970 00:00:00 GMT
Expires: Thu, 01-Jan-1970 00:00:00 GMT
< Set-Cookie: JSESSIONID=xfigpewrhjme16egdblo9iz6p;Path=/
Set-Cookie: JSESSIONID=xfigpewrhjme16egdblo9iz6p;Path=/
< Content-Type: application/json
Content-Type: application/json
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
* Connection #0 to host hostname.hwx.site left intact
{"totalResults":0,"results":0,"cluster":[]}
The API responds with HTTP 200 and the list of clusters added and paired (empty here, since no clusters have been added yet). This verifies that the Knox SSO setup with Beacon is working.
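To run the whole verification unattended, the two steps can be chained into a simple pass/fail check. A minimal sketch, reusing the JWT variable captured in the earlier snippet; hostname and port are the same example values:

# Sketch: assert that Beacon answers with HTTP 200 when presented with the
# hadoop-jwt cookie captured earlier. Hostname and port are example values.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' -u : \
  --cookie "hadoop-jwt=${JWT}" \
  "http://hostname.hwx.site:25968/api/beacon/cluster/list")
if [ "$STATUS" = "200" ]; then
  echo "Beacon Knox SSO verification succeeded"
else
  echo "Beacon Knox SSO verification failed (HTTP $STATUS)"
fi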
06-25-2018
09:05 PM
OBJECTIVE:
Delete a cluster added in DataPlane using the utility script provided with the product.
OVERVIEW:
DPS 1.1 does not support deleting added clusters from the UI. If you want to delete clusters added in DataPlane, a utility script is provided that deletes the added HDP clusters.
PREREQUISITES:
Install jq: jq is a lightweight and flexible command-line JSON processor.
STEPS:
By default, the rm_dp_cluster.sh script is located at /usr/dp/current/core/bin. Execute the script with the following parameters:
DP_JWT: value of the dp_jwt cookie from a valid user's browser session
HADOOP_JWT: value of the hadoop-jwt cookie from a valid user's browser session
DP_HOST_NAME: hostname or IP address of the DataPlane server
CLUSTER_NAME: Ambari cluster name of the cluster to delete
DATA_CENTER_NAME: name of the data center of the cluster to delete
Format:
./rm_dp_cluster.sh <DP_JWT> <HADOOP_JWT> <DP_HOST_NAME> <CLUSTER_NAME> <DATA_CENTER_NAME>
Executing the script:
[root@dphost-mramasami-dlm-test-1 bin]# ./rm_dp_cluster.sh "eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE1Mjk5NjAzMTksInVzZXIiOiJ7XCJpZFwiOjIsXCJ1c2VybmFtZVwiOlwiYWRtaW4xXCIsXCJhdmF0YXJcIjpudWxsLFwicm9sZXNcIjpbXCJTVVBFUkFETUlOXCIsXCJJTkZSQUFETUlOXCJdLFwic2VydmljZXNcIjpbXCJkbG1cIl0sXCJkaXNwbGF5XCI6XCJhZG1pbjFcIixcInBhc3N3b3JkXCI6XCJcIixcImFjdGl2ZVwiOnRydWUsXCJkYk1hbmFnZWRcIjpmYWxzZSxcImdyb3VwTWFuYWdlZFwiOmZhbHNlLFwidXBkYXRlZEF0XCI6MTUyOTYxNTA1MX0ifQ.rM_L4m2vTb6pN3Qlz-pmfWjC83kEc29-u6SDzBrzz_1vhNinUaYTYiyqw3ELKtsJ062BaUmIhAiiv9NDMsaHlDfCmu7QrhfG4ki6YK-idgmWUhcnJS0O0xkq4evS4oYXHlOYV9RAWAzNiD378h-9-1pk8cpqH9FFHdq3KXH9tUfXV0AWYHeDhMlvAl_948-8DfCGeVjg5aBAWXKYO8PseILXB7skF812uaf5SlqCeobHAgZ1lUT7f9ZhN_i4jUXPc-uvoQK5_NYNu3gY8H9W1ECX7BXTzSqiws2etQNYOFBgIUwtbGFcVQOrjPKJi95avEXQxi0sFko_m1sHYh7X0krf25yYcb4AU195U2TqSYJ5pfD7OjXz9XGpalVGARNDa5l5qs_La4odZ9wmqWAgi4jBm2O15a1Faz8qkUTc2IoQ-Sldcfa2POs-mK9a2Elj6fDFCQk250ysWuH-N7gr_JGFhPWwt_4Kq9fCkANTrbZC2cGMVDm6lUY0i1DSwMIu3ZbUOVpu5-5xwR267V5nDaB08jL1MRwpHLaItumbGhO5iJKIVVSfYtoiMUZEbI-LbFAKQJqBLBZdr3hwCKY9Lavd391XLIYawT_wxZgMQYHnjm1luW350jQ34nEVDa-2T28Lvt5sLiKrUjV33NgPVY-1mjtzrSdesng5ybLv7sY" "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5OTYwMzE5fQ.D-vHyFj4c59qbN2Eop8FLfkZ1wibL20T4vHZGFYRV8ZBwD5x3X4dk-iwvY9i88aMENpaXt9whtOUMusBLMlBUge5TLlx8jXUAZ24BuoF2D6bTWzH1CUrBa16clH2hwvXuYZnx26jfrlLCKm9qdaZF83cD8LU-GwnY3dfEWCLi-gb3JDfdWQHTEluxLF4J-E86fZ2hiKc7F2o5aaRnGJugc-uEPSuNocCYmCYCpoh55sHzjj2VVRHa4-t7-pPYQNQaCX_vjXnvjQW2UYvMZpoyMRsAcaNHTAUm0zyidJ3q7zOLzasjZx4iRYnp1ttYa2F9Cdb8FpEJ6Qh8xNoZCLM0HH9mJC8fCMrd87IOX0Gw6dP9rYe58IRIMy3pvPW3sMnIUV_mtxFLGrL7tV1i8ubpXA3kSPOpKtk-YqshWy48Q2IPabGkI5mdAHukyKpW8IZHtTcYdMKHN9p2W7nCFu03gDorxDF4MkvLVgv3LF5-RL0zaqzgjY0kQ-gwZ--8-qX8QdvdNhr2Vg88DG2GnVRZc1tXFuHZDfHfQ3nCSiyITRw8hGlPN6GidZwRyXgg8-Ku9rSqv9AbUbEUeF9_XHR4fc5G_qsBmdF5KurTYmifXF0PcVFejjl1_10kVpUeTK4J_Qg21FLHWDwd6rwCZWspaXLyUmhS0kzHymrsutSmN0" 172.22.118.100 onpremdlm SFO_DC
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 29166 0 29166 0 0 93490 0 --:--:-- --:--:-- --:--:-- 93781
Found cluster with name onpremdlm in DataCenter SFO_DC having id 2
Do you want to delete this cluster? (yes / no)? yes
Deleting cluster...
true
The cluster onpremdlm is now successfully deleted.
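For readability, you can hold the two cookie values in shell variables before calling the script. A minimal sketch; the cookie values are placeholders you must copy from your own logged-in DataPlane browser session:

# Sketch: invoke rm_dp_cluster.sh with the cookies held in variables.
# DP_JWT and HADOOP_JWT are placeholders -- copy the real values from the
# dp_jwt and hadoop-jwt cookies of a valid DataPlane browser session.
DP_JWT="<dp_jwt cookie value>"
HADOOP_JWT="<hadoop-jwt cookie value>"
DP_HOST_NAME="172.22.118.100"

cd /usr/dp/current/core/bin
./rm_dp_cluster.sh "$DP_JWT" "$HADOOP_JWT" "$DP_HOST_NAME" onpremdlm SFO_DC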
03-20-2017
04:13 PM
3 Kudos
@Sunile Manjee Comments are supported in Phoenix as well. Example: "SELECT /* this is a comment */ CAST(USNIG_LONG_ID AS DECIMAL) FROM <Table_Name> ORDER BY USNIG_LONG_ID DESC LIMIT 3;" Reference: https://phoenix.apache.org/language/#comments
03-17-2017
12:32 PM
4 Kudos
@Guillaume Roger One solution is to specify it in config-default.xml. This file should be present in the workflow application directory (where your workflow.xml is present), and it is automatically parsed for properties. This file holds default values for variables that are not defined via the job.properties file or the -D option. You can use the same job.properties file for all invocations of the workflow, or use a different properties file for different runs, but the config-default.xml file is valid for all invocations of the job. Reference: https://issues.apache.org/jira/browse/OOZIE-1673
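For illustration, a minimal config-default.xml could look like the sketch below; the queueName property and the HDFS application path are hypothetical examples, not values from the original question:

# Sketch: create a config-default.xml alongside workflow.xml in the Oozie
# application directory on HDFS. queueName and the path are hypothetical.
cat > config-default.xml <<'EOF'
<configuration>
  <property>
    <name>queueName</name>
    <value>default</value>
  </property>
</configuration>
EOF
hdfs dfs -put -f config-default.xml /user/oozie/apps/my-workflow/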
03-16-2017
08:21 AM
7 Kudos
@Santhosh B Gowda There is no direct REST API call to find out whether HA is enabled ("isHAEnabled") for HDFS. But when HA is enabled for HDFS, the dfs.nameservices parameter gets set. So get the value of dfs.nameservices from the configs: if it is empty, HA is not enabled; if it contains a value, HA is enabled. You can get the configs using the following API: http://<AMBARI-SERVER>/api/v1/clusters/cl1/configurations?type=hdfs-site&tag=TOPOLOGY_RESOLVED
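A sketch of how this check can be scripted, assuming jq is available; the Ambari host, admin credentials, and cluster name cl1 are placeholders:

# Sketch: decide whether HDFS HA is enabled by reading dfs.nameservices
# from hdfs-site. Host, credentials, and cluster name are placeholders.
NS=$(curl -s -u admin:admin \
  "http://<AMBARI-SERVER>/api/v1/clusters/cl1/configurations?type=hdfs-site&tag=TOPOLOGY_RESOLVED" \
  | jq -r '.items[0].properties["dfs.nameservices"] // empty')
if [ -n "$NS" ]; then
  echo "HDFS HA is enabled (nameservices: $NS)"
else
  echo "HDFS HA is not enabled"
fi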
03-16-2017
06:05 AM
2 Kudos
CTAS has these restrictions:
The target table cannot be a partitioned table.
The target table cannot be an external table.
The target table cannot be a list bucketing table.
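For contrast, a CTAS that satisfies all three restrictions creates a plain managed table. A minimal sketch; the JDBC URL, table, and column names are made up for illustration:

# Sketch: a valid Hive CTAS -- the target is a plain managed table, neither
# partitioned, external, nor list-bucketed. All names are illustrative.
beeline -u "jdbc:hive2://localhost:10000" -e "
  CREATE TABLE sales_summary STORED AS ORC AS
  SELECT region, SUM(amount) AS total_amount
  FROM sales
  GROUP BY region;
"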
03-16-2017
05:44 AM
2 Kudos
Thanks @Gnanasekaran G. Please accept the best answer so we can close the thread.
03-16-2017
04:58 AM
1 Kudo
@Gnanasekaran G If this helped, please vote/accept the best answer so we can close the thread.
03-16-2017
02:33 AM
@zaenal rifai If this helped, please vote/accept the best answer.