Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14996 | 03-08-2019 06:33 PM |
| | 6178 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12591 | 09-07-2018 10:33 PM |
| | 7446 | 04-25-2018 01:55 AM |
11-13-2018
06:33 PM
I think you need to delete those files as well:

[root@centos10 krb5kdc]# ll
total 28
-rw------- 1 root root   29 Nov 13 09:36 kadm5.acl
-rw------- 1 root root   29 Nov 13 09:24 kadm5.acl.rpmsave
-rw------- 1 root root   29 Nov 13 09:36 kadm5.acly
-rw------- 1 root root  448 Nov 13 09:35 kdc.conf
-rw------- 1 root root  448 Nov 13 09:24 kdc.conf.rpmsave
-rw------- 1 root root 8192 Nov 13 09:27 principal      <<<<<<<<<<<<<<<<<
-rw------- 1 root root    0 Nov 13 09:37 principal.ok   <<<<<<<<<<<<<<<<<

Then it works:

[root@centos10 ~]# /usr/sbin/kdb5_util create -r BEER.LOC -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'BEER.LOC',
master key name 'K/M@BEER.LOC'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
[root@centos10 ~]#
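For reference, a minimal sketch of the same sequence (file names taken from the listing above; kdb5_util destroy is the cleaner alternative if the old database is still loadable):

# remove the stale KDC database files flagged above, then re-create the realm
rm -f /var/kerberos/krb5kdc/principal /var/kerberos/krb5kdc/principal.ok
/usr/sbin/kdb5_util create -r BEER.LOC -s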
11-01-2018
01:01 PM
Hi Kuldeep,

I updated the MySQL connector jar to mysql-connector-java-5.1.41-bin.jar.

HDP: HDP-2.6.5.0
MySQL: mysql Ver 14.14 Distrib 5.1.73

I performed the above steps and restarted the ambari-server, ambari-agent, hiveserver2, and hive metastore components. However, I am still getting the same error in the logs:

jdbc:hive2://hdpmaster1-dev.<domain>.c> show databases;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:826)
at org.apache.hadoop.hive.metastore.ObjectStore.getAllDatabases(ObjectStore.java:842)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
at com.sun.proxy.$Proxy8.getAllDatabases(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1270)
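The 'OPTION SQL_SELECT_LIMIT=DEFAULT' syntax comes from old Connector/J releases (roughly 5.1.20 and earlier), so still seeing it after upgrading the jar usually means a stale copy remains on some classpath. A hedged way to check (the search paths are typical HDP locations and may differ on your cluster):

# list every Connector/J copy the Hive/Ambari services might be loading
find /usr/hdp /usr/share/java /var/lib/ambari-server /var/lib/ambari-agent \
    -name 'mysql-connector*.jar' 2>/dev/null | xargs -r ls -l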
12-10-2016
08:55 AM
5 Kudos
This has been observed on Ambari 1.7. I know Ambari 1.7 is very old now, but if some people are still using it and facing the same issue, this post can save you a lot of time! 🙂

Ambari 1.7 uses Ganglia for reporting metrics. Our issue was that we were unable to get service metrics for the YARN service. The exact scenario:

1. In the Ganglia UI, I was able to see graphs for the YARN metrics.
2. In the Ambari UI, I was able to see metrics for other services such as HDFS.
3. The issue was observed on one of our customers' clusters.
4. I set up the same cluster on my local system but was not able to reproduce the issue.
5. The only difference between the customer's cluster and mine was that the customer had ResourceManager HA, while I had installed a single-node instance.
6. There was no error while fetching metrics; the graphs were simply blank (screenshot not included here).

How to troubleshoot:

1. Click on any of the metrics, say 'NodeManagers'. This opens the graph in a bigger window.
2. Open the developer tools in Chrome/Firefox and inspect the network activity.
3. Note the REST call from which the UI is trying to fetch the metrics.
4. Now fail over the RM and repeat steps 1-3.
5. Same REST call? No difference.
6. If you flip the RMs, the graphs will start populating data.

Root cause:

If you look at the hadoop-metrics2.properties file, it has only one RM host (the initial rm1) hardcoded for resourcemanager.sink.ganglia.servers:

resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664

Workaround:

Make the RM host that the REST call points to (troubleshooting step 3) the active RM.

Permanent fix:

Edit the /etc/hadoop/conf/hadoop-metrics2.properties file and add the second RM host, e.g.:

resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664,rm2.crazyadmins.com:8664

Note - This file is not managed by Ambari 1.7, so feel free to modify it on both RM hosts and restart the RMs via Ambari after the modifications.

Hope you enjoyed this article! Please comment if you have any questions. Happy Hadooping!! 🙂
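If you prefer applying the permanent fix from the shell, a minimal sketch of the same edit (run on both RM hosts, then restart the RMs via Ambari; the hostnames are the examples used in this article):

# point the Ganglia sink at both ResourceManagers
sed -i 's|^resourcemanager.sink.ganglia.servers=.*|resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664,rm2.crazyadmins.com:8664|' \
    /etc/hadoop/conf/hadoop-metrics2.properties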
12-09-2016
08:01 PM
3 Kudos
@ANSARI FAHEEM AHMED I have written a few blog posts on performance tuning. Please have a look at the articles below:
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-1/
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-2/
12-11-2016
10:39 AM
@Dmitry Otblesk - Please turn off maintenance mode for HDFS to allow it to start with other services after reboot.
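If you would rather do this through the Ambari REST API than the UI, a minimal sketch (the server, cluster name, and credentials are placeholders):

# clear maintenance mode on the HDFS service
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
    -d '{"RequestInfo":{"context":"Turn off maintenance mode for HDFS"},"Body":{"ServiceInfo":{"maintenance_state":"OFF"}}}' \
    http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/HDFS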
01-20-2017
02:46 PM
@Singh Pratap - Please accept the appropriate answer.
12-11-2016
10:45 AM
Thanks @Kuldeep Kulkarni
This has cleared up some doubts about how Hadoop services log in automatically even though the initial user's TGT shows as expired.
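In short, the services authenticate from their own keytabs and re-login periodically, so a user's expired TGT doesn't affect them. A quick way to inspect a service keytab (a sketch; the path follows the usual HDP convention and may differ on your cluster):

# list the principals and timestamps inside the DataNode service keytab
klist -kt /etc/security/keytabs/dn.service.keytab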
12-09-2016
09:32 PM
5 Kudos
@justlearning
You can use the standard Apache Oozie examples and modify them per your requirements - this is the easiest way to get started writing Oozie workflows. The bundle has an example workflow.xml for each supported action. You can find the Oozie examples on an HDP cluster at the location below (provided you have installed the Oozie client):

/usr/hdp/current/oozie-client/doc/oozie-examples.tar.gz

Hope this information helps!
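To try one out, a minimal sketch (the Oozie server URL is a placeholder, and job.properties usually needs your nameNode/jobTracker values filled in first):

# extract the bundled examples and push them to HDFS
tar -xzf /usr/hdp/current/oozie-client/doc/oozie-examples.tar.gz -C ~/
hdfs dfs -put ~/examples /user/$(whoami)/

# submit the map-reduce example workflow
oozie job -oozie http://<oozie-server>:11000/oozie \
    -config ~/examples/apps/map-reduce/job.properties -run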
12-06-2016
07:28 PM
5 Kudos
In the previous post we saw how to automate HDP installation with NameNode HA using Ambari Blueprints. In this post, we will see how to deploy a single-node HDP cluster with Kerberos authentication (MIT KDC) via an Ambari Blueprint.

Note - From Ambari 2.6.x onwards, you have to register a VDF to use an internal repository; otherwise Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions older than 2.6.x, this guide works without any modifications.

Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Below are the steps to install a single-node HDP cluster with Kerberos authentication (MIT KDC) using an internal repository via Ambari Blueprints.

Step 1: Install the Ambari server using the steps mentioned under the link below:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

Step 2: Register the ambari-agent manually.
Install the ambari-agent package on all the nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.

Step 3: Install and configure the MIT KDC.

Detailed steps (demo on HDP Sandbox 2.4):

3.1 Clone our GitHub repository on the Ambari server in your HDP cluster.
Note - This script will install and configure the KDC on your Ambari server.

git clone https://github.com/crazyadmins/useful-scripts.git

Sample output:

[root@sandbox ~]# git clone https://github.com/crazyadmins/useful-scripts.git
Initialized empty Git repository in /root/useful-scripts/.git/
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 29 (delta 4), reused 25 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), done.
3.2 Go to the useful-scripts/ambari directory:

[root@sandbox ~]# cd useful-scripts/ambari/
[root@sandbox ambari]# ls -lrt
total 16
-rw-r--r-- 1 root root 5701 2016-04-23 20:33 setup_kerberos.sh
-rw-r--r-- 1 root root 748 2016-04-23 20:33 README
-rw-r--r-- 1 root root 366 2016-04-23 20:33 ambari.props
[root@sandbox ambari]#

3.3 Copy setup_only_kdc.sh and ambari.props to the host where you want to set up the KDC server.

3.4 Edit the ambari.props file according to your cluster environment.
Note - For a multi-node cluster, please don't forget to set a comma-separated list of hosts as the value of the KERBEROS_CLIENTS variable (not applicable for this post).

Sample output for my Sandbox:

[root@sandbox ambari]# cat ambari.props
CLUSTER_NAME=Sandbox
AMBARI_ADMIN_USER=admin
AMBARI_ADMIN_PASSWORD=admin
AMBARI_HOST=sandbox.hortonworks.com
KDC_HOST=sandbox.hortonworks.com
REALM=HWX.COM
KERBEROS_CLIENTS=sandbox.hortonworks.com
##### Notes #####
#1. KERBEROS_CLIENTS - Comma separated list of Kerberos clients in case of multinode cluster
#2. Admin principal is admin/admin and the password is hadoop
[root@sandbox ambari]#

3.5 Start the installation by simply executing setup_only_kdc.sh.
Note - Please run setup_only_kdc.sh from KDC_HOST only; you don't need to set up or configure the KDC yourself, the script will do everything for you.

Step 4: Configure the blueprint. Please follow the steps below to create the blueprint files.

4.1 Create a hostmapping.json file as shown below.
Note - This file holds the information for all the hosts that are part of your HDP cluster.

{
"blueprint" : "hdptest",
"default_password" : "hadoop",
"host_groups" :[
{
"name" : "bluetest",
"hosts" : [
{
"fqdn" : "bluetest.openstacklocal"
}
]
}
],
"credentials" : [
{
"alias" : "kdc.admin.credential",
"principal" : "admin/admin",
"key" : "hadoop",
"type" : "TEMPORARY"
}
],
"security" : {
"type" : "KERBEROS"
},
"Clusters" : {"cluster_name":"kerberosCluster"}
}

4.2 Create a cluster_configuration.json file; it contains the mapping of host groups to HDP components.

{
"configurations": [{
"kerberos-env": {
"properties_attributes": {},
"properties": {
"realm": "HWX.COM",
"kdc_type": "mit-kdc",
"kdc_host": "bluetest.openstacklocal",
"admin_server_host": "bluetest.openstacklocal"
}
}
}, {
"krb5-conf": {
"properties_attributes": {},
"properties": {
"domains": "HWX.COM",
"manage_krb5_conf": "true"
}
}
}],
"host_groups": [{
"name": "bluetest",
"components": [{
"name": "NAMENODE"
}, {
"name": "NODEMANAGER"
}, {
"name": "DATANODE"
}, {
"name": "ZOOKEEPER_CLIENT"
}, {
"name": "HDFS_CLIENT"
}, {
"name": "YARN_CLIENT"
}, {
"name": "MAPREDUCE2_CLIENT"
}, {
"name": "ZOOKEEPER_SERVER"
}, {
"name": "SECONDARY_NAMENODE"
}, {
"name": "RESOURCEMANAGER"
}, {
"name": "APP_TIMELINE_SERVER"
}, {
"name": "HISTORYSERVER"
}],
"cardinality": 1
}],
"Blueprints": {
"blueprint_name": "hdptest",
"stack_name": "HDP",
"stack_version": "2.4",
"security": {
"type": "KERBEROS"
}
}
}

Step 5: Create the internal repository map.

5.1 HDP repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
"Repositories" : {
"base_url" : "http://172.26.64.249/hdp/centos6/HDP-2.4.2.0/",
"verify_base_url" : true
}
}

5.2 HDP-UTILS repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
"Repositories" : {
"base_url" : "http://172.26.64.249/hdp/centos6/HDP-UTILS-1.1.0.20/",
"verify_base_url" : true
}
}

Step 6: Register the blueprint with the Ambari server by executing the command below (the blueprint name in the URL must match the blueprint_name from cluster_configuration.json):

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/hdptest -d @cluster_configuration.json

Step 7: Set up the internal repos via the REST API by executing the curl calls below:

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json
curl -H "X-Requested-By: ambari"-X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json . Step 8: Pull the trigger! Below command will start cluster installation. curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json . Please refer Next Part for Automated HDP installation using Ambari blueprint with Kerberos authentication for multi-node cluster. . Please feel free to comment if you need any further help on this. Happy Hadooping!!
12-01-2016
12:27 PM
3 Kudos
After completing an Ambari upgrade from Ambari 1.7 to Ambari 2.0.2, Ambari failed to start because of the error(s) below:

[root@xyz ~]# service ambari-server start
Using python /usr/bin/python2.6
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.

If you check the /var/log/ambari-server/ambari-server.out log, you see the error(s) below:

[root@xyz ~]# cat /var/log/ambari-server/ambari-server.out
[EL Warning]: metadata: 2016-12-01 10:01:23.436--ServerSession(306824177)--The reference column name [resource_type_id] mapped on the element [field permissions] does not correspond to a valid id or basic field/column on the mapping reference. Will use referenced column name as provided.
[EL Info]: 2016-12-01 10:01:24.412--ServerSession(306824177)--EclipseLink, version: Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd
[EL Info]: connection: 2016-12-01 10:01:24.603--ServerSession(306824177)--file:/usr/lib/ambari-server/ambari-server-2.0.2.25.jar_ambari-server_url=jdbc:postgresql://localhost/ambari_user=ambari login successful

Note - The output above is complex and confusing; you should always check /var/log/ambari-server/ambari-server.log as well. In my case, I was getting the error below in ambari-server.log:

org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion=2.2, serviceName=SMARTSENSE
at org.apache.ambari.server.api.services.AmbariMetaInfo.getService(AmbariMetaInfo.java:497)
at org.apache.ambari.server.api.services.AmbariMetaInfo.getComponent(AmbariMetaInfo.java:265)
at org.apache.ambari.server.controller.utilities.DatabaseChecker.checkDBConsistency(DatabaseChecker.java:96)
at org.apache.ambari.server.controller.AmbariServer.run(AmbariServer.java:217)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:665)

Why did this happen? This is a known issue - sometimes the stack component for SmartSense gets removed from the Ambari DB after an upgrade.

How to resolve it:

1. Check which version of SmartSense you have installed on your cluster (see the version-check sketch at the end of this post).
2. Download the matching RPM from the Customer Support portal.
3. Run the command below to re-add the SmartSense view and stack to the Ambari DB:

[root@xyz crazyadmins.com]# hst add-to-ambari
Enter SmartSense distributable path: /home/kuldeepk/smartsense-hst-1.2.1.0-161.x86_64.rpm
Added SmartSense service definition to Ambari

4. Restart ambari-server:

[root@xyz crazyadmins.com]# ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Using python /usr/bin/python
Stopping ambari-server
Ambari Server is not running
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@prodnode1 ~]#

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! 🙂
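For step 1 above, a quick way to confirm the installed SmartSense version (a minimal sketch, assuming an RPM-based install like the one shown):

# the version in the package name should match the RPM you pass to hst add-to-ambari
rpm -qa | grep -i smartsense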