06-25-2018
09:48 PM
2 Kudos
OBJECTIVE:
Verify the Beacon SSO setup is configured correctly.
OVERVIEW:
After enabling Knox SSO for the Beacon service as described in the documentation, it is a good idea to verify that the Beacon Knox SSO setup is configured correctly before adding the clusters to DataPlane.
STEPS:
STEP 1:
From the Ambari configs, get the beacon.sso.knox.providerurl property and execute the following curl command:
Syntax:
curl -iku $knox-username:$knox-password "<beacon.sso.knox.providerurl>?originalUrl=http://<beacon_server>:25968/api/beacon/cluster/list"
Example:
Command:
curl -iku $username:$password "https://hostname.hwx.site:8443/gateway/knoxsso/api/v1/websso?originalUrl=http://hostname.hwx.site:25968/api/beacon/cluster/list"
Output:
HTTP/1.1 307 Temporary Redirect
Date: Thu, 14 Jun 2018 21:26:27 GMT
X-Frame-Options: DENY
Set-Cookie: JSESSIONID=1abzrcp2xl7sm1k4jgtevkwv6x;Path=/gateway/knoxsso;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/knoxsso; Max-Age=0; Expires=Wed, 13-Jun-2018 21:26:27 GMT
Set-Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc;Path=/;Domain=.hwx.site;HttpOnly
Location: http://hostname.hwx.site:25968/api/beacon/cluster/list
Content-Length: 0
Server: Jetty(9.2.15.v20160210)
Note the hadoop-jwt cookie value from the output.
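If you prefer to capture the token programmatically rather than copying it by hand, a minimal sketch (reusing the example provider URL and credentials above, and assuming grep and sed are available on the host) is:
# Extract the hadoop-jwt value from the Set-Cookie header of the Knox SSO response
HADOOP_JWT=$(curl -siku $username:$password \
  "https://hostname.hwx.site:8443/gateway/knoxsso/api/v1/websso?originalUrl=http://hostname.hwx.site:25968/api/beacon/cluster/list" \
  | grep -o 'hadoop-jwt=[^;]*' | head -1 | sed 's/^hadoop-jwt=//')
echo "$HADOOP_JWT"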
STEP 2:
Using the hadoop-jwt value from step 1, call the Beacon API directly:
Syntax:
curl -ivL -u : --cookie "hadoop-jwt=<hadoop-jwt>" http://<beacon_server>:25968/api/beacon/cluster/list
Command:
curl -ivL -u : --cookie "hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc" http://hostname.hwx.site:25968/api/beacon/cluster/list
Output:
* About to connect() to hostname.hwx.site port 25968 (#0)
* Trying 172.27.54.132...
* Connected to hostname.hwx.site (172.27.54.132) port 25968 (#0)
* Server auth using Basic with user ''
> GET /api/beacon/cluster/list HTTP/1.1
> Authorization: Basic Og==
> User-Agent: curl/7.29.0
> Host: hostname.hwx.site:25968
> Accept: */*
> Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5MTExNTg3fQ.lB4ZPXjqAQZJSS8vWJ3exXD4HcOCjTS6L4b9uIf6ZWc80eBTVuEv-u4iiEr02V44hxuEwVAeDVcDW1w0DGauW5L9hHfTKf_y87kaPhPKk2yN20aFtbbrA0lzgawxWIFFaj4wMxwyzDyyKlF6NRijamFhH00TWAH1vRITagVQWEc
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Expires: Thu, 01-Jan-1970 00:00:00 GMT
Expires: Thu, 01-Jan-1970 00:00:00 GMT
< Set-Cookie: JSESSIONID=xfigpewrhjme16egdblo9iz6p;Path=/
Set-Cookie: JSESSIONID=xfigpewrhjme16egdblo9iz6p;Path=/
< Content-Type: application/json
Content-Type: application/json
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
* Connection #0 to host hostname.hwx.site left intact
{"totalResults":0,"results":0,"cluster":[]}
The response returns the list of clusters that have been added and paired (empty in this example, since no clusters have been added yet). A successful response like this verifies that the Knox SSO setup with Beacon is working.
06-25-2018
09:05 PM
OBJECTIVE:
Delete a cluster added to DataPlane using the utility script provided in the product.
OVERVIEW:
DPS 1.1 does not support deleting added clusters from the UI. If you want to delete clusters added to DataPlane, a utility script is provided that can be used to delete the HDP clusters that were added.
PREREQUISITES:
Install jq: jq is a lightweight and flexible command-line JSON processor.
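For example, on a RHEL/CentOS host with the EPEL repository enabled, jq can be installed as follows (a sketch; adjust for your distribution's package manager):
yum install -y jq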
STEPS:
By default, the rm_dp_cluster.sh script is located at /usr/dp/current/core/bin. Execute the rm_dp_cluster.sh script with the following parameters:
DP_JWT : Value of the dp_jwt cookie from a valid user's browser session
HADOOP_JWT : Value of the hadoop-jwt cookie from a valid user's browser session
DP_HOST_NAME : Hostname or IP address of the DataPlane server
CLUSTER_NAME : Ambari cluster name of the cluster to delete
DATA_CENTER_NAME : Name of the data center of the cluster to delete
Syntax:
./rm_dp_cluster.sh <DP_JWT> <HADOOP_JWT> <DP_HOST_NAME> <CLUSTER_NAME> <DATA_CENTER_NAME>
Example:
[root@dphost-mramasami-dlm-test-1 bin]# ./rm_dp_cluster.sh "eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE1Mjk5NjAzMTksInVzZXIiOiJ7XCJpZFwiOjIsXCJ1c2VybmFtZVwiOlwiYWRtaW4xXCIsXCJhdmF0YXJcIjpudWxsLFwicm9sZXNcIjpbXCJTVVBFUkFETUlOXCIsXCJJTkZSQUFETUlOXCJdLFwic2VydmljZXNcIjpbXCJkbG1cIl0sXCJkaXNwbGF5XCI6XCJhZG1pbjFcIixcInBhc3N3b3JkXCI6XCJcIixcImFjdGl2ZVwiOnRydWUsXCJkYk1hbmFnZWRcIjpmYWxzZSxcImdyb3VwTWFuYWdlZFwiOmZhbHNlLFwidXBkYXRlZEF0XCI6MTUyOTYxNTA1MX0ifQ.rM_L4m2vTb6pN3Qlz-pmfWjC83kEc29-u6SDzBrzz_1vhNinUaYTYiyqw3ELKtsJ062BaUmIhAiiv9NDMsaHlDfCmu7QrhfG4ki6YK-idgmWUhcnJS0O0xkq4evS4oYXHlOYV9RAWAzNiD378h-9-1pk8cpqH9FFHdq3KXH9tUfXV0AWYHeDhMlvAl_948-8DfCGeVjg5aBAWXKYO8PseILXB7skF812uaf5SlqCeobHAgZ1lUT7f9ZhN_i4jUXPc-uvoQK5_NYNu3gY8H9W1ECX7BXTzSqiws2etQNYOFBgIUwtbGFcVQOrjPKJi95avEXQxi0sFko_m1sHYh7X0krf25yYcb4AU195U2TqSYJ5pfD7OjXz9XGpalVGARNDa5l5qs_La4odZ9wmqWAgi4jBm2O15a1Faz8qkUTc2IoQ-Sldcfa2POs-mK9a2Elj6fDFCQk250ysWuH-N7gr_JGFhPWwt_4Kq9fCkANTrbZC2cGMVDm6lUY0i1DSwMIu3ZbUOVpu5-5xwR267V5nDaB08jL1MRwpHLaItumbGhO5iJKIVVSfYtoiMUZEbI-LbFAKQJqBLBZdr3hwCKY9Lavd391XLIYawT_wxZgMQYHnjm1luW350jQ34nEVDa-2T28Lvt5sLiKrUjV33NgPVY-1mjtzrSdesng5ybLv7sY" "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbjEiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNTI5OTYwMzE5fQ.D-vHyFj4c59qbN2Eop8FLfkZ1wibL20T4vHZGFYRV8ZBwD5x3X4dk-iwvY9i88aMENpaXt9whtOUMusBLMlBUge5TLlx8jXUAZ24BuoF2D6bTWzH1CUrBa16clH2hwvXuYZnx26jfrlLCKm9qdaZF83cD8LU-GwnY3dfEWCLi-gb3JDfdWQHTEluxLF4J-E86fZ2hiKc7F2o5aaRnGJugc-uEPSuNocCYmCYCpoh55sHzjj2VVRHa4-t7-pPYQNQaCX_vjXnvjQW2UYvMZpoyMRsAcaNHTAUm0zyidJ3q7zOLzasjZx4iRYnp1ttYa2F9Cdb8FpEJ6Qh8xNoZCLM0HH9mJC8fCMrd87IOX0Gw6dP9rYe58IRIMy3pvPW3sMnIUV_mtxFLGrL7tV1i8ubpXA3kSPOpKtk-YqshWy48Q2IPabGkI5mdAHukyKpW8IZHtTcYdMKHN9p2W7nCFu03gDorxDF4MkvLVgv3LF5-RL0zaqzgjY0kQ-gwZ--8-qX8QdvdNhr2Vg88DG2GnVRZc1tXFuHZDfHfQ3nCSiyITRw8hGlPN6GidZwRyXgg8-Ku9rSqv9AbUbEUeF9_XHR4fc5G_qsBmdF5KurTYmifXF0PcVFejjl1_10kVpUeTK4J_Qg21FLHWDwd6rwCZWspaXLyUmhS0kzHymrsutSmN0" 172.22.118.100 onpremdlm SFO_DC
Output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload Upload Total Spent Left Speed
100 29166 0 29166 0 0 93490 0 --:--:-- --:--:-- --:--:-- 93781
Found cluster with name onpremdlm in DataCenter SFO_DC having id 2
Do you want to delete this cluster? (yes / no)? yes
Deleting cluster...
true
The cluster onpremdlm has now been deleted successfully.
11-03-2016
03:52 AM
2 Kudos
@zhixun he Yes. Whenever there is a change, a snapshot will be created on the source, and the Falcon process instance will be triggered based on the configured frequency.
10-25-2016
05:29 PM
10 Kudos
HDFS snapshots are read-only, point-in-time copies of the file system. Snapshots can be taken on a subtree of the file system or on the entire file system. Snapshots are very efficient because only the changes to the data are recorded, and the data can be restored to any previous snapshot. Common use cases for snapshots are data backup and disaster recovery.
HDFS Snapshot Extension:
Falcon supports HDFS snapshot-based replication through the HDFS Snapshot extension. Using this feature, you can:
Create and manage snapshots on source/target directories.
Mirror data from source to target for disaster recovery using these snapshots.
Perform retention on the snapshots created on source and target.
Snapshot replication only works from a single source directory to a single target directory. For snapshot replication to work, the following is expected:
Both source and target clusters must run Hadoop version 2.7.0 or higher.
The user submitting and scheduling the Falcon extension should have permissions on both source and target directories.
Both directories must be snapshottable.
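For background, the basic HDFS snapshot operations that this extension builds on look like the following (a minimal sketch using standard HDFS commands; the directory is the one used later in this example and the snapshot name is just an illustration):
# Allow snapshots on a directory (run as the HDFS superuser)
hdfs dfsadmin -allowSnapshot /tmp/falcon/HDFSSnapshot/source
# Create a named snapshot of the current state of the directory
hdfs dfs -createSnapshot /tmp/falcon/HDFSSnapshot/source snap1
# List the snapshots taken on the directory
hdfs dfs -ls /tmp/falcon/HDFSSnapshot/source/.snapshot
# Delete a snapshot that is no longer needed
hdfs dfs -deleteSnapshot /tmp/falcon/HDFSSnapshot/source snap1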
To perform HDFS snapshot replication in Falcon, we need to create the source and target cluster entities, and also create and give permissions to the staging and working directories. Use the following steps to accomplish this.
Source Cluster:
hdfs dfs -rm -r /tmp/fs /tmp/fw
hdfs dfs -mkdir -p /tmp/fs
hdfs dfs -chmod 777 /tmp/fs
hdfs dfs -mkdir -p /tmp/fw
hdfs dfs -chmod 755 /tmp/fw
hdfs dfs -chown falcon /tmp/fs
hdfs dfs -chown falcon /tmp/fw
Target Cluster:
hdfs dfs -rm -r /tmp/fs /tmp/fw
hdfs dfs -mkdir -p /tmp/fs
hdfs dfs -chmod 777 /tmp/fs
hdfs dfs -mkdir -p /tmp/fw
hdfs dfs -chmod 755 /tmp/fw
hdfs dfs -chown falcon /tmp/fs
hdfs dfs -chown falcon /tmp/fw
Cluster Entities:
primaryCluster.xml:
<?xml version="1.0" encoding="UTF-8"?>
<cluster xmlns="uri:falcon:cluster:0.1" colo="USWestOregon" description="oregonHadoopCluster" name="primaryCluster">
<interfaces>
<interface type="readonly" endpoint="webhdfs://mycluster1:20070" version="0.20.2" />
<interface type="write" endpoint="hdfs://mycluster1:8020" version="0.20.2" />
<interface type="execute" endpoint="primaryCluster-12.openstacklocal:8050" version="0.20.2" />
<interface type="workflow" endpoint="http://primaryCluster-14.openstacklocal:11000/oozie" version="3.1" />
<interface type="messaging" endpoint="tcp://primaryCluster-9.openstacklocal:61616?daemon=true" version="5.1.6" />
<interface type="registry" endpoint="thrift://primaryCluster-14.openstacklocal:9083" version="0.11.0" />
</interfaces>
<locations>
<location name="staging" path="/tmp/fs" />
<location name="temp" path="/tmp" />
<location name="working" path="/tmp/fw" />
</locations>
<ACL owner="ambari-qa" group="users" permission="0755" />
<properties>
<property name="dfs.namenode.kerberos.principal" value="nn/_HOST@EXAMPLE.COM" />
<property name="hive.metastore.kerberos.principal" value="hive/_HOST@EXAMPLE.COM" />
<property name="hive.metastore.sasl.enabled" value="true" />
<property name="hadoop.rpc.protection" value="authentication" />
<property name="hive.metastore.uris" value="thrift://primaryCluster-14.openstacklocal:9083" />
<property name="hive.server2.uri" value="hive2://primaryCluster-14.openstacklocal:10000" />
</properties>
</cluster>
falcon entity -submit -type cluster -file primaryCluster.xml --> primaryCluster
backupCluster.xml:
<?xml version="1.0" encoding="UTF-8"?>
<cluster xmlns="uri:falcon:cluster:0.1" colo="USWestOregon" description="oregonHadoopCluster" name="backupCluster">
<interfaces>
<interface type="readonly" endpoint="webhdfs://mycluster2:20070" version="0.20.2" />
<interface type="write" endpoint="hdfs://mycluster2:8020" version="0.20.2" />
<interface type="execute" endpoint="backupCluster-5.openstacklocal:8050" version="0.20.2" />
<interface type="workflow" endpoint="http://backupCluster-6.openstacklocal:11000/oozie" version="3.1" />
<interface type="messaging" endpoint="tcp://backupCluster-1.openstacklocal:61616" version="5.1.6" />
<interface type="registry" endpoint="thrift://backupCluster-6.openstacklocal:9083" version="0.11.0" />
</interfaces>
<locations>
<location name="staging" path="/tmp/fs" />
<location name="temp" path="/tmp" />
<location name="working" path="/tmp/fw" />
</locations>
<ACL owner="ambari-qa" group="users" permission="0755" />
<properties>
<property name="dfs.namenode.kerberos.principal" value="nn/_HOST@EXAMPLE.COM" />
<property name="hive.metastore.kerberos.principal" value="hive/_HOST@EXAMPLE.COM" />
<property name="hive.metastore.sasl.enabled" value="true" />
<property name="hadoop.rpc.protection" value="authentication" />
<property name="hive.metastore.uris" value="thrift://backupCluster-6.openstacklocal:9083" />
<property name="hive.server2.uri" value="hive2://backupCluster-6.openstacklocal:10000" />
</properties>
</cluster>
falcon entity -submit -type cluster -file backupCluster.xml --> backupCluster
HDFS Snapshot Replication:
Source: [Create the directory and copy the data]
hdfs dfs -mkdir -p /tmp/falcon/HDFSSnapshot/source
hdfs dfs -put NYSE-2000-2001.tsv /tmp/falcon/HDFSSnapshot/source
Note: you can download the NYSE-2000-2001.tsv file from https://s3.amazonaws.com/hw-sandbox/tutorial1/NYSE-2000-2001.tsv.gz
Allow snapshots on the directory:
hdfs dfsadmin -allowSnapshot /tmp/falcon/HDFSSnapshot/source [run as the hdfs user]
hdfs lsSnapshottableDir [run as the ambari-qa user]
Target Cluster:
hdfs dfs -mkdir -p /tmp/falcon/HDFSSnapshot/target
hdfs dfsadmin -allowSnapshot /tmp/falcon/HDFSSnapshot/target
hdfs-snapshot.properties:
jobName=HDFSSnapshot
jobClusterName=primaryCluster
jobValidityStart=2016-05-09T06:25Z
jobValidityEnd=2016-05-09T08:00Z
jobFrequency=days(1)
sourceCluster=primaryCluster
sourceSnapshotDir=/tmp/falcon/HDFSSnapshot/source
sourceSnapshotRetentionAgeLimit=days(1)
sourceSnapshotRetentionNumber=3
targetCluster=backupCluster
targetSnapshotDir=/tmp/falcon/HDFSSnapshot/target
targetSnapshotRetentionAgeLimit=days(1)
targetSnapshotRetentionNumber=3
jobAclOwner=ambari-qa
jobAclGroup=users
jobAclPermission=0x755
Submit and schedule the job using the property file:
falcon extension -extensionName hdfs-snapshot-mirroring -submitAndSchedule -file hdfs-snapshot.properties
Using the jobName, we can find the Oozie job it has launched:
falcon extension -instances -jobName HDFSSnapshot
Once the job is completed, we can see that a snapshot is automatically created on the source, and that the snapshot along with the source content is replicated to the target cluster:
Source Cluster HDFS Content:
hdfs dfs -ls -R hdfs://mycluster1:8020/tmp/falcon/HDFSSnapshot/source/
drwxr-xr-x   - ambari-qa hdfs          0 2016-10-25 02:27 hdfs://mycluster1:8020/tmp/falcon/HDFSSnapshot/source/source
-rw-r--r--   3 ambari-qa hdfs   44005963 2016-10-25 02:27 hdfs://mycluster1:8020/tmp/falcon/HDFSSnapshot/source/source/NYSE-2000-2001.tsv
Target Cluster HDFS Content:
hdfs dfs -ls -R hdfs://mycluster2:8020/tmp/falcon/HDFSSnapshot/target/
drwxr-xr-x   - ambari-qa hdfs          0 2016-10-25 02:28 hdfs://mycluster2:8020/tmp/falcon/HDFSSnapshot/target/source
-rw-r--r--   3 ambari-qa hdfs   44005963 2016-10-25 02:28 hdfs://mycluster2:8020/tmp/falcon/HDFSSnapshot/target/source/NYSE-2000-2001.tsv
We can see that the data has been replicated from the source to the target cluster.
Source Snapshot Directory:
hdfs dfs -ls hdfs://mycluster1:8020/tmp/falcon/HDFSSnapshot/source/.snapshot
Found 1 items
drwxr-xr-x   - ambari-qa hdfs          0 2016-10-25 02:27 hdfs://mycluster1:8020/tmp/falcon/HDFSSnapshot/source/.snapshot/falcon-snapshot-HDFSSnapshot-2016-05-09-06-25-1477362461509
Target Snapshot Directory:
hdfs dfs -ls hdfs://mycluster2:8020/tmp/falcon/HDFSSnapshot/target/.snapshot
Found 1 items
drwxr-xr-x   - ambari-qa hdfs          0 2016-10-25 02:28 hdfs://mycluster2:8020/tmp/falcon/HDFSSnapshot/target/.snapshot/falcon-snapshot-HDFSSnapshot-2016-05-09-06-25-1477362461509
We can see that the snapshot directory has been automatically created on the source and also replicated from the source to the target cluster.
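As an additional check, you can compare snapshots on the source directory to see what has changed since a given snapshot was taken (a sketch; replace <snapshot-name> with one of the names listed under .snapshot above, and note that "." refers to the current state of the directory):
# Show the file-level differences between the named snapshot and the current state
hdfs snapshotDiff /tmp/falcon/HDFSSnapshot/source <snapshot-name> .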
09-26-2016
10:30 AM
1 Kudo
A good, descriptive article on how to install Atlas HA via Ambari.