Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
07-21-2016
11:40 PM
2 Kudos
This tutorial has been verified on HDP-2.4.2.0 with Ambari 2.2.2.0, on a Kerberized HDP cluster with NameNode HA. Please follow the steps below to configure the File View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos by following steps 1 to 5 of this article: https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

If you are running the Ambari Server as the root user, add:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running the Ambari Server as a non-root user, add the following instead, replacing <ambari-server-user> with the user running the Ambari Server:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

Assuming your Ambari Server principal is ambari-server@REALM.COM (if not, replace 'ambari-server' with the user part of your principal):

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a home directory on HDFS for the user accessing the File View. For example, in my case the admin user accesses the File View:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> Manage Ambari --> Views --> edit the File View (create a new one if it doesn't exist) and configure its settings. Note - you may need to modify the values to match your environment.

After the steps above, you should be able to access your File View without any issues. If you receive any errors, please check /var/log/ambari-server/ambari-server.log for details and troubleshooting.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
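If you prefer to script step 2 instead of clicking through the UI, Ambari 2.x ships a configs.sh helper under /var/lib/ambari-server/resources/scripts/. The sketch below derives the core-site property names from the Ambari Server user; the cluster name and hostname in the commented commands are placeholders, not values from this tutorial:

```shell
# Sketch: deriving the proxyuser property names from the Ambari Server user.
# AMBARI_USER is a placeholder -- set it to the user running the Ambari daemon.
AMBARI_USER=root
GROUPS_PROP="hadoop.proxyuser.${AMBARI_USER}.groups"
HOSTS_PROP="hadoop.proxyuser.${AMBARI_USER}.hosts"

# configs.sh edits a config type and creates a new version of it.
# Commented out because it must run against a live Ambari Server
# ("localhost" and "MyCluster" are placeholders):
# /var/lib/ambari-server/resources/scripts/configs.sh set localhost MyCluster \
#     core-site "$GROUPS_PROP" "*"
# /var/lib/ambari-server/resources/scripts/configs.sh set localhost MyCluster \
#     core-site "$HOSTS_PROP" "*"
echo "$GROUPS_PROP"
echo "$HOSTS_PROP"
```

After running the two configs.sh commands you still need to restart the affected services, exactly as in the UI-based flow.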
06-24-2016
06:36 AM
8 Kudos
This tutorial has been verified on HDP-2.4.0.0 with Ambari 2.2.1.0, on a Kerberized HDP cluster with Ambari configured for SSL. Note - the steps are the same for Ambari with or without SSL. Please follow the steps below to configure the Pig View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos by following steps 1 to 5 of this article: https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

If you are running the Ambari Server as the root user, add:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running the Ambari Server as a non-root user, add the following instead, replacing <ambari-server-user> with the user running the Ambari Server:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

Assuming your Ambari Server principal is ambari-server@REALM.COM (if not, replace 'ambari-server' with the user part of your principal):

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a home directory on HDFS for the user accessing the Pig View. For example, in my case the admin user accesses the Pig View:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> Manage Ambari --> Views --> edit the Pig View (create a new one if it doesn't exist) and configure its settings. Note - you may need to modify the values to match your environment.

After the steps above, you should be able to access your Pig View without any issues. If you receive any errors, please check /var/log/ambari-server/ambari-server.log for details and troubleshooting.
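Step 3 can be made safe to re-run with a small existence guard. A sketch (admin is just the example view user from above; the HDFS commands are commented out because they need a live cluster):

```shell
# Sketch: idempotent version of the step-3 home-directory creation.
# VIEW_USER is a placeholder for whichever account opens the Pig View.
VIEW_USER=admin
USER_DIR="/user/${VIEW_USER}"

# 'hadoop fs -test -d' exits non-zero when the directory is missing, so
# mkdir runs only on the first execution.
# sudo -u hdfs hadoop fs -test -d "$USER_DIR" || sudo -u hdfs hadoop fs -mkdir "$USER_DIR"
# sudo -u hdfs hadoop fs -chown "${VIEW_USER}:hdfs" "$USER_DIR"
# sudo -u hdfs hadoop fs -chmod 755 "$USER_DIR"
echo "$USER_DIR"
```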
06-20-2016
04:43 AM
5 Kudos
This tutorial has been verified on HDP-2.4.0.0 with Ambari 2.2.1.0, on a Kerberized HDP cluster with Ambari configured for SSL. Note - the steps are the same for Ambari with or without SSL. Please follow the steps below to configure the Hive View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos by following steps 1 to 5 of this article: https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

If you are running the Ambari Server as the root user, add:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running the Ambari Server as a non-root user, add the following instead, replacing <ambari-server-user> with the user running the Ambari Server:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

Assuming your Ambari Server principal is ambari-server@REALM.COM (if not, replace 'ambari-server' with the user part of your principal):

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a home directory on HDFS for the user accessing the Hive View. For example, in my case the admin user accesses the Hive View:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> Manage Ambari --> Views --> edit the Hive View (create a new one if it doesn't exist) and configure its settings. Note - you may need to modify the values to match your environment.

After the steps above, you should be able to access your Hive View without any issues. If you receive any errors, please check /var/log/ambari-server/ambari-server.log for details and troubleshooting.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
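Before opening the view, you can confirm the Kerberos and home-directory setup by hitting WebHDFS directly as the view user. A sketch: the NameNode hostname is a placeholder, 50070 is the default NameNode HTTP port on HDP 2.x, and the curl call needs a valid Kerberos ticket (kinit) first:

```shell
# Sketch: quick SPNEGO check against WebHDFS for the view user's home dir.
# NN_HOST is a placeholder hostname; VIEW_USER matches the step-3 example.
NN_HOST=namenode.example.com
VIEW_USER=admin
URL="http://${NN_HOST}:50070/webhdfs/v1/user/${VIEW_USER}?op=GETFILESTATUS"

# Commented out: requires a live cluster and a Kerberos ticket.
# curl --negotiate -u : "$URL"
echo "$URL"
```

A successful response returns a FileStatus JSON object; a 401 here usually means the browser/CLI side of Kerberos is the problem rather than the view configuration.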
06-19-2016
08:15 PM
4 Kudos
Note - This tutorial assumes that you already have a Kerberized HDP cluster. If you want to configure automated Kerberos, please refer to https://community.hortonworks.com/articles/29203/automated-kerberos-installation-and-configuration.html.

First you need to configure your Ambari Server for Kerberos; without this, views will not work. Use the commands below:

1. Log in to kadmin and create a principal for the Ambari Server. Note - replace REALM.COM with your realm.

addprinc -randkey ambari-server@REALM.COM

2. Extract the principal created above to a keytab file:

xst -k ambari.server.keytab ambari-server@REALM.COM

3. The command above generates the keytab file in the current working directory; copy it to /etc/security/keytabs/:

cp ambari.server.keytab /etc/security/keytabs/ambari.server.keytab

Note - make sure the user running the Ambari Server daemon has read access to ambari.server.keytab.

4. Stop the Ambari Server daemon and set up security:

service ambari-server stop
ambari-server setup-security

5. Select option 3, "Setup Ambari kerberos JAAS configuration". Enter the Kerberos principal name for the Ambari Server you created in step 1, then enter the path to the keytab for the Ambari principal. Restart the Ambari Server:

ambari-server restart

6. Add the properties below to yarn-site.xml using the Ambari UI:

yarn.timeline-service.http-authentication.proxyuser.ambari-server.hosts=*
yarn.timeline-service.http-authentication.proxyuser.ambari-server.users=*
yarn.timeline-service.http-authentication.proxyuser.ambari-server.groups=*

7. Add the properties below to core-site.xml:

hadoop.proxyuser.ambari-server.hosts=*
hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.admin.hosts=*
hadoop.proxyuser.admin.groups=*

Your Tez View should be accessible now!

Happy Hadooping!! 🙂
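Before restarting Ambari in step 5, it can help to confirm that the keytab produced in steps 1-3 actually works. A sketch (REALM.COM is the placeholder realm used above; the klist/kinit calls are commented out because they need the keytab and a reachable KDC):

```shell
# Sketch: sanity-checking the Ambari Server keytab before restarting Ambari.
KEYTAB=/etc/security/keytabs/ambari.server.keytab
PRINCIPAL="ambari-server@REALM.COM"   # replace REALM.COM with your realm

# klist -ket lists the principals and encryption types stored in the keytab;
# kinit -kt proves the keytab can actually obtain a ticket.
# klist -ket "$KEYTAB"
# kinit -kt "$KEYTAB" "$PRINCIPAL" && echo "keytab works"
echo "$PRINCIPAL"
```

If kinit fails here, fix the keytab (or its file permissions) before running ambari-server setup-security; a bad keytab is the most common reason the JAAS setup silently fails later.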
06-17-2016
12:42 PM
Good stuff @Sagar Shimpi 🙂
05-23-2016
07:49 PM
12 Kudos
Please follow the steps below to set up Oozie HA in a Kerberized environment.

Step 1: Configure a MySQL or Oracle database for Oozie, as the HA configuration does not work with the default embedded Derby database. Please refer to https://community.hortonworks.com/articles/183/moving-oozie-to-mysql-with-ambari.html for steps to migrate the Oozie database.

Step 2: Log in to the Ambari UI, go to Hosts, select the host on which you need to add the additional Oozie Server, click Add, and select Oozie Server. For example, I will add an Oozie Server on kk3.hwxblr.com.

Step 3: Set up a load balancer. Please refer to this blog post for setting up a lightweight, open-source, Linux-based load balancer.

Step 4: Configure Kerberos for your cluster if not already done. Please refer to our blog for automated Kerberos configuration.

Step 5: Log in to the Ambari UI and set the configuration parameters below for the Oozie service:

oozie.zookeeper.connection.string=<zookeeper1>:2181,<zookeeper2>:2181,<zookeeper3>:2181
oozie.services.ext=org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService
oozie.base.url=http://<loadbalancer.hostname>:11000/oozie
oozie.authentication.kerberos.principal=*

Step 6: In the oozie-env section of the Oozie configuration, uncomment the OOZIE_BASE_URL property and set it to the load balancer URL, for example:

export OOZIE_BASE_URL="http://<loadbalancer.hostname>:11000/oozie"

Step 7: Log in to your KDC and create an HTTP principal for the load balancer:

kadmin.local -q "addprinc -randkey HTTP/<loadbalancer_hostname>@<realm>"

Step 8: Create a single spnego.service.keytab containing both Oozie Servers' and the load balancer's principals, and distribute it to both Oozie Servers. For example, in my case test1-ambari-server.hwxblr.com is the load balancer and kk2/kk4 are my Oozie Servers:

[root@kk4 ~]# klist -ket /etc/security/keytabs/spnego.service.keytab
Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
3 05/03/16 16:42:43 HTTP/kk4.hwxblr.com@HWX.COM (aes256-cts-hmac-sha1-96)
3 05/03/16 16:42:43 HTTP/kk4.hwxblr.com@HWX.COM (aes128-cts-hmac-sha1-96)
3 05/03/16 16:42:43 HTTP/kk4.hwxblr.com@HWX.COM (des3-cbc-sha1)
3 05/03/16 16:42:43 HTTP/kk4.hwxblr.com@HWX.COM (arcfour-hmac)
3 05/03/16 16:44:05 HTTP/kk2.hwxblr.com@HWX.COM (aes256-cts-hmac-sha1-96)
3 05/03/16 16:44:05 HTTP/kk2.hwxblr.com@HWX.COM (aes128-cts-hmac-sha1-96)
3 05/03/16 16:44:05 HTTP/kk2.hwxblr.com@HWX.COM (des3-cbc-sha1)
3 05/03/16 16:44:05 HTTP/kk2.hwxblr.com@HWX.COM (arcfour-hmac)
4 05/03/16 16:43:18 HTTP/test1-ambari-server.hwxblr.com@HWX.COM (aes256-cts-hmac-sha1-96)
4 05/03/16 16:43:18 HTTP/test1-ambari-server.hwxblr.com@HWX.COM (aes128-cts-hmac-sha1-96)
4 05/03/16 16:43:18 HTTP/test1-ambari-server.hwxblr.com@HWX.COM (des3-cbc-sha1)
4 05/03/16 16:43:18 HTTP/test1-ambari-server.hwxblr.com@HWX.COM (arcfour-hmac)

Step 9: Make sure you have saved the updated keytab on both Oozie hosts.

Step 10: Restart the Oozie service via the Ambari UI.

Step 11: Configure your browser for SPNEGO authentication using the steps at the URLs below:
http://www.ghostar.org/2015/06/google-chrome-spnego-and-webhdfs-on-hadoop/
http://www.microhowto.info/howto/configure_firefox_to_authenticate_using_spnego_and_kerberos.html

Step 12: Hit http://<load-balancer-hostname>:11000/oozie and you should be able to see the Oozie UI.
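Before testing in the browser, the Oozie CLI can confirm the HA setup through the load balancer. A sketch with a placeholder load-balancer hostname (the oozie commands are commented out because they need a Kerberos ticket and a running Oozie):

```shell
# Sketch: verifying Oozie HA through the load balancer with the Oozie CLI.
LB_HOST=loadbalancer.example.com   # placeholder load-balancer hostname
OOZIE_URL="http://${LB_HOST}:11000/oozie"

# -status reports the system mode; -servers lists the servers in the HA group.
# oozie admin -oozie "$OOZIE_URL" -status    # expect "System mode: NORMAL"
# oozie admin -oozie "$OOZIE_URL" -servers   # should list both Oozie servers
echo "$OOZIE_URL"
```

If -status works but the browser shows 401, the problem is usually the browser's SPNEGO configuration (step 11) rather than Oozie itself.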
Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
05-22-2016
11:08 AM
15 Kudos
I have written an ambari-admin utility to simplify the effort of finding and triggering Ambari API curl calls. I'm planning to add many more features than ambari-shell. The currently supported features are listed below.

Demo on a multi-node cluster:

1. Clone our GitHub repo to your local machine or to any node in your cluster.
[root@sme-ambari-server ~]# git clone https://github.com/crazyadmins/useful-scripts.git
Initialized empty Git repository in /root/useful-scripts/.git/
remote: Counting objects: 106, done.
remote: Total 106 (delta 0), reused 0 (delta 0), pack-reused 106
Receiving objects: 100% (106/106), 16.89 KiB, done.
Resolving deltas: 100% (37/37), done.
2. Go to useful-scripts/ambari/:
[root@sme-ambari-server ~]# cd useful-scripts/ambari/
3. Edit ambari.props and modify the values of the parameters below to match your cluster environment:
[root@sme-ambari-server ambari]# cat ambari.props
CLUSTER_NAME=sme
AMBARI_ADMIN_USER=admin
AMBARI_ADMIN_PASSWORD=admin
AMBARI_HOST=sme-ambari-server.hwxblr.com
KDC_HOST=sme-ambari-server.hwxblr.com
REALM=HWX.COM
KERBEROS_CLIENTS=sme-ambari-server.hwxblr.com,kknew1.hwxblr.com,kknew2.hwxblr.com,kknew3.hwxblr.com
##### Notes #####
#1. KERBEROS_CLIENTS - Comma separated list of Kerberos clients in case of multinode cluster
#2. Admin principal is admin/admin and password is hadoop
Note - You can ignore the Kerberos-related parameters for now; in the future this script will have functionality to set up Kerberos.

4. To print the usage, run ambari-admin.sh without any arguments. Please refer to the first screenshot at the beginning of this article.

5. Stop all services:
[root@sme-ambari-server ambari]# ./ambari-admin.sh stopall
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=14oisfe8i5bclm8tdk3npm390;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/61",
"Requests" : {
"id" : 61,
"status" : "Accepted"
}
}
6. Start all the services
[root@sme-ambari-server ambari]# ./ambari-admin.sh startall
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=1lo2x6u1r5xq319suwh8xiiquw;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/62",
"Requests" : {
"id" : 62,
"status" : "Accepted"
}
}
7. Get the list of all services installed in your cluster; the script also shows the installed components host-wise:
[root@sme-ambari-server ambari]# ./ambari-admin.sh listall
Below is the list of installed services in your cluster:
HDFS
MAPREDUCE2
SMARTSENSE
TEZ
YARN
ZOOKEEPER
########################### List of Host-wise installed components ###########################
kknew1.hwxblr.com
"component_name" | "DATANODE"
"component_name" | "HDFS_CLIENT"
"component_name" | "HST_AGENT"
"component_name" | "HST_SERVER"
"component_name" | "MAPREDUCE2_CLIENT"
"component_name" | "NAMENODE"
"component_name" | "NODEMANAGER"
"component_name" | "YARN_CLIENT"
"component_name" | "ZOOKEEPER_CLIENT"
kknew2.hwxblr.com
"component_name" | "DATANODE"
"component_name" | "HDFS_CLIENT"
"component_name" | "HST_AGENT"
"component_name" | "MAPREDUCE2_CLIENT"
"component_name" | "NODEMANAGER"
"component_name" | "SECONDARY_NAMENODE"
"component_name" | "YARN_CLIENT"
"component_name" | "ZOOKEEPER_CLIENT"
"component_name" | "ZOOKEEPER_SERVER"
kknew3.hwxblr.com
"component_name" | "APP_TIMELINE_SERVER"
"component_name" | "DATANODE"
"component_name" | "HDFS_CLIENT"
"component_name" | "HISTORYSERVER"
"component_name" | "HST_AGENT"
"component_name" | "MAPREDUCE2_CLIENT"
"component_name" | "NODEMANAGER"
"component_name" | "RESOURCEMANAGER"
"component_name" | "TEZ_CLIENT"
"component_name" | "YARN_CLIENT"
"component_name" | "ZOOKEEPER_CLIENT"
"component_name" | "ZOOKEEPER_SERVER"
sme-ambari-server.hwxblr.com
"component_name" | "HDFS_CLIENT"
"component_name" | "HST_AGENT"
"component_name" | "MAPREDUCE2_CLIENT"
"component_name" | "TEZ_CLIENT"
"component_name" | "YARN_CLIENT"
"component_name" | "ZOOKEEPER_CLIENT"
8. Stop an individual service:
[root@sme-ambari-server ambari]# ./ambari-admin.sh stop yarn
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=10tyime2kd7pr1e0o4t8gwg2jv;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/63",
"Requests" : {
"id" : 63,
"status" : "Accepted"
}
}
9. Start an individual service:
[root@sme-ambari-server ambari]# ./ambari-admin.sh start yarn
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=a7wdqn56clk8176d99rm20hz5;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/64",
"Requests" : {
"id" : 64,
"status" : "Accepted"
}
}
10. Stop an individual service component:
[root@sme-ambari-server ambari]# ./ambari-admin.sh stop hst_agent kknew2.hwxblr.com
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=vrmnhicrgog42lmu7exfqdm4;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/65",
"Requests" : {
"id" : 65,
"status" : "Accepted"
}
}
11. Start an individual service component:
[root@sme-ambari-server ambari]# ./ambari-admin.sh start hst_agent kknew2.hwxblr.com
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=1fqqp5vqpourgjll9ydnyev3e;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/66",
"Requests" : {
"id" : 66,
"status" : "Accepted"
}
}
12. Remove any Hadoop client from any host:
[root@sme-ambari-server ambari]# ./ambari-admin.sh remove tez_client kknew3.hwxblr.com
13. Add any Hadoop client on any host:
[root@sme-ambari-server ambari]# ./ambari-admin.sh add tez_client kknew3.hwxblr.com
HTTP/1.1 201 Created
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=pl31hyai9aqt1eyeaj5ehe9i;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 0
Server: Jetty(8.1.17.v20150415)
HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=18tzs8uctj2061pmjegp129aqz;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 152
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://sme-ambari-server.hwxblr.com:8080/api/v1/clusters/sme/requests/67",
"Requests" : {
"id" : 67,
"status" : "Accepted"
}
}
Sleeping for 5 seconds before starting TEZ_CLIENT
HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=1bh3u5foki8vh1fgg240i49h6x;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 0
Server: Jetty(8.1.17.v20150415)
14. Back up the database for Hive/Oozie/Ambari.
Note - Please enter your database password when prompted; for example, in this demo I entered the default password ('bigdata') for the Ambari PostgreSQL DB.
[root@sme-ambari-server ambari]# ./ambari-admin.sh backup ambari postgresql sme-ambari-server.hwxblr.com
Password:
[root@sme-ambari-server ambari]# ls -lrt ~/ambari_db_backup_2016_05_22_11_32.sql
-rw-r--r-- 1 root root 6812385 May 22 11:33 /root/ambari_db_backup_2016_05_22_11_32.sql
[root@sme-ambari-server ambari]# date
Sun May 22 11:33:20 UTC 2016
[root@sme-ambari-server ambari]#
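For anyone curious what a "stop all" like step 5 boils down to as a raw Ambari REST call: I haven't reproduced the script's exact internals here, so treat this as an equivalent sketch. The hostname, cluster name, and credentials are the ones from this demo; adjust for your cluster:

```shell
# Sketch: the raw Ambari REST call behind a cluster-wide service stop.
AMBARI_HOST=sme-ambari-server.hwxblr.com   # demo Ambari host
CLUSTER=sme                                # demo cluster name
BODY='{"RequestInfo":{"context":"Stop All Services"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}'

# Putting every service into the INSTALLED state stops it. The
# X-Requested-By header is mandatory for Ambari write operations.
# Commented out: needs a live Ambari Server.
# curl -i -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d "$BODY" \
#     "http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/services"
echo "$BODY"
```

This matches the "202 Accepted" responses shown above: Ambari queues a request (the "Requests" object with an id) and performs the state change asynchronously.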
Note - I'm planning to add the features below to this script; please feel free to provide your feedback 🙂 And please feel free to suggest any other features you would like! Stay tuned for Part 2. Happy Hadooping!! 🙂
05-22-2016
10:10 AM
@shivom mali - You can use the same script; you only need to set KERBEROS_CLIENTS in ambari.props to a comma-separated list of hosts.
05-08-2016
07:42 PM
@Richard Guo - Thank you. Yes, for AD the work is in progress; I will update you once it's done.