Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 14993 | 03-08-2019 06:33 PM |
|  | 6176 | 02-15-2019 08:47 PM |
|  | 5098 | 09-26-2018 06:02 PM |
|  | 12588 | 09-07-2018 10:33 PM |
|  | 7446 | 04-25-2018 01:55 AM |
12-09-2016
07:36 PM
1 Kudo
@shyam gurram - If you are not using Falcon, just remove it. You can always install it later.
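If you prefer to remove it through the REST API rather than the Ambari UI, a rough sketch (the cluster name and admin credentials are placeholders; the service has to be stopped before it can be deleted):

# Stop the Falcon service
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop Falcon"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/FALCON

# Remove the service definition from the cluster
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/FALCON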
12-09-2016
09:57 AM
Customer confirmed that this worked.
12-09-2016
09:05 AM
@Joshua Adeleke Can you get the complete application logs for application_1474363291123_56265?
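Once the application has finished, the aggregated container logs can be collected with the YARN CLI, for example:

# Pull the complete aggregated logs for the failed application (run as the submitting user)
yarn logs -applicationId application_1474363291123_56265 > application_1474363291123_56265.log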
12-08-2016
02:29 PM
3 Kudos
@Joshua Adeleke - Can you please look into the stderr section of the Oozie launcher logs? If possible, please attach the complete Oozie launcher logs to this thread.
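If it helps, one way to get hold of the launcher logs through the Oozie CLI (the Oozie URL and workflow id below are placeholders):

# Show the workflow status and the external (launcher) job ids for each action
oozie job -oozie http://<oozie-server>:11000/oozie -info <workflow-id>

# Fetch the workflow job log itself
oozie job -oozie http://<oozie-server>:11000/oozie -log <workflow-id>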
12-06-2016
07:28 PM
5 Kudos
In the previous post we saw how to automate HDP installation with NameNode HA using Ambari Blueprints. In this post, we will see how to deploy a single-node HDP cluster with Kerberos authentication via Ambari Blueprints.

Note - From Ambari 2.6.X onwards, you have to register a VDF to use an internal repository, or else Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions below 2.6.X, this guide works without any modifications.
Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Below are the steps to install a single-node HDP cluster with Kerberos authentication (MIT KDC) using an internal repository via Ambari Blueprints.

Step 1: Install the Ambari server using the steps mentioned in the link below.
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

Step 2: Register ambari-agent manually.
Install the ambari-agent package on all the nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.

Step 3: Install and configure the MIT KDC.
Detailed steps (demo on HDP Sandbox 2.4):

3.1 Clone our GitHub repository on the Ambari server in your HDP cluster.
Note - This script will install and configure the KDC on your Ambari server.

git clone https://github.com/crazyadmins/useful-scripts.git

Sample output:

[root@sandbox ~]# git clone https://github.com/crazyadmins/useful-scripts.git
Initialized empty Git repository in /root/useful-scripts/.git/
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 29 (delta 4), reused 25 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), done.
3.2 Go to the useful-scripts/ambari directory.

[root@sandbox ~]# cd useful-scripts/ambari/
[root@sandbox ambari]# ls -lrt
total 16
-rw-r--r-- 1 root root 5701 2016-04-23 20:33 setup_kerberos.sh
-rw-r--r-- 1 root root 748 2016-04-23 20:33 README
-rw-r--r-- 1 root root 366 2016-04-23 20:33 ambari.props
[root@sandbox ambari]#

3.3 Copy setup_only_kdc.sh and ambari.props to the host where you want to set up the KDC server.

3.4 Edit and modify the ambari.props file according to your cluster environment.
Note - In case of a multi-node cluster, please don't forget to add a comma-separated list of hosts as the value of the KERBEROS_CLIENTS variable (not applicable for this post).

Sample output for my Sandbox:

[root@sandbox ambari]# cat ambari.props
CLUSTER_NAME=Sandbox
AMBARI_ADMIN_USER=admin
AMBARI_ADMIN_PASSWORD=admin
AMBARI_HOST=sandbox.hortonworks.com
KDC_HOST=sandbox.hortonworks.com
REALM=HWX.COM
KERBEROS_CLIENTS=sandbox.hortonworks.com
##### Notes #####
#1. KERBEROS_CLIENTS - Comma separated list of Kerberos clients in case of multinode cluster
#2. Admin principal is admin/admin and password is hadoop
[root@sandbox ambari]#

3.5 Start the installation by simply executing setup_only_kdc.sh.
Note - Please run setup_only_kdc.sh from KDC_HOST only; you don't need to set up or configure the KDC yourself, this script does everything for you.

Step 4: Configure the blueprint.
Please follow the steps below to create the blueprint files.

4.1 Create the hostmapping.json file as shown below.
Note - This file holds information about all the hosts that are part of your HDP cluster.

{
"blueprint" : "hdptest",
"default_password" : "hadoop",
"host_groups" :[
{
"name" : "bluetest",
"hosts" : [
{
"fqdn" : "bluetest.openstacklocal"
}
]
}
],
"credentials" : [
{
"alias" : "kdc.admin.credential",
"principal" : "admin/admin",
"key" : "hadoop",
"type" : "TEMPORARY"
}
],
"security" : {
"type" : "KERBEROS"
},
"Clusters" : {"cluster_name":"kerberosCluster"}
}

4.2 Create the cluster_configuration.json file; it contains the mapping of host groups to HDP components.

{
"configurations": [{
"kerberos-env": {
"properties_attributes": {},
"properties": {
"realm": "HWX.COM",
"kdc_type": "mit-kdc",
"kdc_host": "bluetest.openstacklocal",
"admin_server_host": "bluetest.openstacklocal"
}
}
}, {
"krb5-conf": {
"properties_attributes": {},
"properties": {
"domains": "HWX.COM",
"manage_krb5_conf": "true"
}
}
}],
"host_groups": [{
"name": "bluetest",
"components": [{
"name": "NAMENODE"
}, {
"name": "NODEMANAGER"
}, {
"name": "DATANODE"
}, {
"name": "ZOOKEEPER_CLIENT"
}, {
"name": "HDFS_CLIENT"
}, {
"name": "YARN_CLIENT"
}, {
"name": "MAPREDUCE2_CLIENT"
}, {
"name": "ZOOKEEPER_SERVER"
}, {
"name": "SECONDARY_NAMENODE"
}, {
"name": "RESOURCEMANAGER"
}, {
"name": "APP_TIMELINE_SERVER"
}, {
"name": "HISTORYSERVER"
}],
"cardinality": 1
}],
"Blueprints": {
"blueprint_name": "hdptest",
"stack_name": "HDP",
"stack_version": "2.4",
"security": {
"type": "KERBEROS"
}
}
}

Step 5: Create the internal repository map.

5.1 HDP repository - copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as repo.json.

{
"Repositories" : {
"base_url" : "http://172.26.64.249/hdp/centos6/HDP-2.4.2.0/",
"verify_base_url" : true
}
}

5.2 HDP-UTILS repository - copy the contents below, modify base_url to point to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
"Repositories" : {
"base_url" : "http://172.26.64.249/hdp/centos6/HDP-UTILS-1.1.0.20/",
"verify_base_url" : true
}
}

Step 6: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server>:8080/api/v1/blueprints/hdptest -d @cluster_configuration.json

Step 7: Set up the internal repos via the REST API by executing the curl calls below.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json

Step 8: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/kerberosCluster -d @hostmapping.json

The POST returns a request resource that you can poll to track installation progress (see the sketch at the end of this post).

Please refer to the next part for automated HDP installation using an Ambari Blueprint with Kerberos authentication for a multi-node cluster.

Please feel free to comment if you need any further help on this. Happy Hadooping!!
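For reference, a minimal sketch of how you could track the installation that Step 8 kicks off; the request id 1 and the cluster name kerberosCluster are assumptions based on the files above, adjust them to your environment:

# Poll the request created by the cluster-creation POST; the response includes request_status and progress_percent
curl -u admin:admin -H "X-Requested-By: ambari" http://<ambari-server-hostname>:8080/api/v1/clusters/kerberosCluster/requests/1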
12-02-2016
02:30 PM
3 Kudos
@Saurabh I have resolved this kind of error for multiple customers by following the steps below.

#Command 1:
hadoop fs -put /usr/hdp/current/atlas-server/hook/hive/* hdfs://<NN>/user/oozie/share/lib/lib_<Timestamp>/hive/

#Command 2 (please run the command below on the Oozie server as the 'oozie' user):
oozie admin -oozie http://<oozie-server>:11000/oozie -sharelibupdate

Re-run your Oozie workflow; it should succeed without any issue. Hope this helps!

Note - The Oozie sharelib update step is missing from the Stack Overflow answer.
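Before re-running the workflow, you can optionally confirm that the jars landed in the sharelib (same Oozie server URL as above):

# List the jars currently registered in the hive sharelib and check that the Atlas hook jars are present
oozie admin -oozie http://<oozie-server>:11000/oozie -shareliblist hive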
12-01-2016
09:27 PM
3 Kudos
@Karan Alang I can see that you have mentioned an IP address under the custom settings. Please do not use IP addresses anywhere; Kerberos authentication doesn't work well with IP addresses. Also, please select the custom setting where you have currently selected local cluster under the 'Cluster Management' section. I would suggest you double-check the settings against my article. I can also see that the YARN ResourceManager and ATS properties are unset in your PDF. Hope this information helps.
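As a quick sanity check (a minimal sketch; nslookup is assumed to be available on the host), you can confirm that the FQDN you enter in the view settings resolves correctly:

# Print the fully qualified hostname that should be used instead of the IP address
hostname -f

# Verify forward DNS resolution for that hostname
nslookup <fqdn-of-the-host>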
12-01-2016
08:05 PM
1 Kudo
@Karan Alang - Can you please confirm whether you have configured Ambari for Kerberos authentication? Can you please share a screenshot of your Hive View settings? Also, if possible, please attach ambari-server.log to this thread.
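In case it is not already done: on a kerberized cluster the Ambari server needs its own Kerberos JAAS configuration so that views such as the Hive View can authenticate. A minimal sketch of the usual setup (the principal name and keytab path here are only examples, adjust to your environment):

# Run on the Ambari server host and choose the "Setup Ambari kerberos JAAS configuration" option
ambari-server setup-security

# When prompted, supply the Ambari principal and keytab, for example ambari@EXAMPLE.COM and /etc/security/keytabs/ambari.keytab

# Restart Ambari for the change to take effect
ambari-server restart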
12-01-2016
04:23 PM
2 Kudos
@Sanaz Janbakhsh
Can you please log in to the NameNode machine and check whether the NameNode process is running?

ps -ef|grep -i namenode

If yes, can you please check whether it is listening on port 8020 by running netstat?

netstat -tulapn|grep 8020

If it is listening, then there seem to be connectivity issues from the DataNode machine to the NameNode machine; I am guessing the given logs are from the DataNode.
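If both checks pass on the NameNode, a simple way to test connectivity from the DataNode side (a rough sketch; replace the placeholder with your NameNode host, and use nc if telnet is not installed):

# Run this from the DataNode machine to check whether the NameNode RPC port is reachable
telnet <namenode-host> 8020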
12-01-2016
12:27 PM
3 Kudos
After completing an Ambari upgrade from Ambari 1.7 to Ambari 2.0.2, Ambari failed to start with the error below:

[root@xyz ~]# service ambari-server start
Using python /usr/bin/python2.6
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information. . If you check /var/log/ambari-server/ambari-server.out logs, you get below error(s): [root@xyz ~]# cat /var/log/ambari-server/ambari-server.out
[EL Warning]: metadata: 2016-12-01 10:01:23.436--ServerSession(306824177)--The reference column name [resource_type_id] mapped on the element [field permissions] does not correspond to a valid id or basic field/column on the mapping reference. Will use referenced column name as provided.
[EL Info]: 2016-12-01 10:01:24.412--ServerSession(306824177)--EclipseLink, version: Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd
[EL Info]: connection: 2016-12-01 10:01:24.603--ServerSession(306824177)--file:/usr/lib/ambari-server/ambari-server-2.0.2.25.jar_ambari-server_url=jdbc:postgresql://localhost/ambari_user=ambari login successful

Note - The error above is complex and confusing. You should always check /var/log/ambari-server/ambari-server.log. In my case, I was getting the error below in ambari-server.log:

org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion=2.2, serviceName=SMARTSENSE
at org.apache.ambari.server.api.services.AmbariMetaInfo.getService(AmbariMetaInfo.java:497)
at org.apache.ambari.server.api.services.AmbariMetaInfo.getComponent(AmbariMetaInfo.java:265)
at org.apache.ambari.server.controller.utilities.DatabaseChecker.checkDBConsistency(DatabaseChecker.java:96)
at org.apache.ambari.server.controller.AmbariServer.run(AmbariServer.java:217)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:665)

Why did this happen?
This is a known issue - sometimes the stack component for SmartSense gets removed from the Ambari DB after an upgrade.

How to resolve this?
1. Check which version of SmartSense is installed on your cluster (see the quick check at the end of this post).
2. Download the matching RPM from the Customer Support portal.
3. Run the command below to re-add the SmartSense view and stack definition to the Ambari DB.

[root@xyz crazyadmins.com]# hst add-to-ambari
Enter SmartSense distributable path: /home/kuldeepk/smartsense-hst-1.2.1.0-161.x86_64.rpm
Added SmartSense service definition to Ambari

4. Restart ambari-server.

[root@xyz crazyadmins.com]# ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Using python /usr/bin/python
Stopping ambari-server
Ambari Server is not running
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@prodnode1 ~]#

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! 🙂
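For step 1 above, a quick way to check which SmartSense version is installed before downloading the matching RPM (the package name follows the smartsense-hst RPM shown earlier in this post):

# Check the installed SmartSense (HST) package version
rpm -qa | grep -i smartsense-hst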