Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15005 | 03-08-2019 06:33 PM |
| | 6178 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12600 | 09-07-2018 10:33 PM |
| | 7446 | 04-25-2018 01:55 AM |
10-24-2016
02:57 PM
4 Kudos
@Nikita Kiselev Can you please double-check your principal name? Judging by the message below, the hostname part of the principal is wrong, or there is a typo: "Mechanism level: Server not found in Kerberos database (7)". Also, try doing kinit with the Hive service keytab and see if that works.
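The host part of a service principal must match the FQDN the KDC knows for that server. As a rough sketch of the check (the principal and FQDN below are made-up examples; in practice read the principal out of the keytab with `klist -kt`):

```shell
# Hypothetical service principal; in practice read it from the keytab with:
#   klist -kt /etc/security/keytabs/hive.service.keytab
principal='hive/prodnode1.openstacklocal@SUPPORT.COM'

# Strip the service-name prefix and the realm suffix to get the host part.
host_part=${principal#*/}
host_part=${host_part%@*}

# Compare against the expected FQDN; a mismatch is what produces
# "Server not found in Kerberos database (7)".
expected='prodnode1.openstacklocal'
if [ "$host_part" = "$expected" ]; then
  echo "principal host OK"
else
  echo "host mismatch: $host_part vs $expected"
fi
```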
10-24-2016
02:49 PM
2 Kudos
@Gary Cameron Can you please rerun the command below and then click Test Connection again: ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar Please also check the Ambari server log (/var/log/ambari-server/ambari-server.log) for any hints.
10-24-2016
01:36 PM
6 Kudos
Below are the steps to migrate the Oozie database from Derby to MySQL.

Step 1 - Have a MySQL server installed and ready to be configured.

Step 2 - Stop the Oozie service from the Ambari UI.

Step 3 - On the Ambari server, run the command below: ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Note - Please pass the appropriate driver path if /usr/share/java/mysql-connector-java.jar does not exist.

Step 4 - Log in to the MySQL server as the root user, create a blank 'oozie' database, and grant the required permissions to the 'oozie' user: create database oozie; grant all privileges on oozie.* to 'oozie'@'<oozie-server>' identified by 'oozie';
Note - The second 'oozie' is your Oozie database password; you can change it in the command above if you wish.

Step 5 - Add the MySQL database server details to the Oozie configuration via the Ambari UI.

Step 6 - Make sure the MySQL connector jar is present under the Oozie libext directory (/usr/hdp/<version>/oozie/libext/mysql-connector-java.jar); if not, copy it there from a machine that has it.

Step 7 - Prepare the Oozie war file (run this command on the Oozie server as the oozie user): /usr/hdp/<version>/oozie/bin/oozie-setup.sh prepare-war

Step 8 - Create the Oozie schema using the command below (run it on the Oozie host as the oozie user): /usr/hdp/<version>/oozie/bin/oozie-setup.sh db create -run

Please note - The steps above do not migrate historical data from Derby to MySQL. Oozie stores workflow/coordinator configuration as MEDIUMBLOB values, and there is no straightforward way to convert that data into a MySQL-compatible format. If you take a SQL dump from Derby and import it directly into MySQL, the Oozie server will start and you will see all the historical data; however, whenever Oozie tries to change a coordinator action state or workflow action state, you may see parsing errors like the one below (because of the incorrect BLOBs in MySQL): 2016-08-22 06:48:19,036 WARN CoordMaterializeTransitionXCommand:523 - SERVER[oozienode2.openstacklocal] USER[root] GROUP[-] TOKEN[] APP[test] JOB[0000003-160822025623541-oozie-oozi-C] ACTION[-] Configuration parse error. read from DB :4f424a00000001000000010005636f6465630002677a1f8b0800000000000000c5945d4bc3301486ef05ffc32ebc6d9276acce520afe8121526fbc8be9a9ab4b73e269639de27f37eb9c30d6099d8237219fcffbe6cd47aad094d5a323d95668b2f3b3c924b58416a85df72ddf36b286ac5e378a2adbdec87699f2beeb6bf84 java.io.IOException: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog. at org.apache.oozie.util.XConfiguration.parse(XConfiguration.java:289) at org.apache.oozie.util.XConfiguration.<init>(XConfiguration.java:80) Happy Hadooping!! Please leave your feedback or questions in the comment section.
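The database commands in Step 4 can be wrapped in a small helper so the Oozie server host and password are substituted in one place. This is only a sketch: the host and password passed in are placeholders, and the GRANT ... IDENTIFIED BY syntax assumes MySQL 5.x (MySQL 8 splits this into a separate CREATE USER plus GRANT).

```shell
# Emit the Step 4 SQL for a given Oozie server host and password, ready to
# pipe into: mysql -u root -p
oozie_db_sql() {
  local host="$1" pass="$2"
  printf "CREATE DATABASE oozie;\n"
  printf "GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%s' IDENTIFIED BY '%s';\n" \
    "$host" "$pass"
}

# Placeholder host and password; replace with your Oozie server FQDN and a real password.
oozie_db_sql "prodnode2.openstacklocal" "oozie"
```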
10-24-2016
09:10 AM
4 Kudos
@ranjith AFAIK, we cannot set the MySQL root password using Ambari Blueprints. Setting the root password requires mysqladmin and similar MySQL commands; with Ambari Blueprints we can only set the MySQL passwords of the daemon users for metastore databases such as Oozie and Hive.
10-14-2016
10:16 AM
7 Kudos
@Sarah Maadawy I think you are passing an incorrect principal name while doing kinit. Can you please run: klist -ket /etc/security/keytabs/hdfs.headless.keytab Sample output: [root@ambarangerdap1 ~]# klist -ket /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (arcfour-hmac)
1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (des-cbc-md5)
1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (des3-cbc-sha1)
1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (aes128-cts-hmac-sha1-96)
1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (aes256-cts-hmac-sha1-96) From the output above I can see that my hdfs principal is hdfs-ambari-sme@SUPPORT.COM, so I would use the command: kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-ambari-sme Please try this and let me know if it helps! HCC is always there to help you. Happy Hadooping! 🙂
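If you'd rather not eyeball the klist output, the principal column can be pulled out with awk and fed straight to kinit. A sketch using the sample output above (the parsing assumes klist's usual header line, keytab name line, and dashed separator before the data rows):

```shell
# Sample 'klist -ket' output, embedded here so the parsing is visible;
# in practice you would run:
#   klist -ket /etc/security/keytabs/hdfs.headless.keytab
sample='Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp         Principal
---- ----------------- ------------------------------------------
   1 09/25/16 07:17:02 hdfs-ambari-sme@SUPPORT.COM (arcfour-hmac)'

# The principal is the 4th whitespace-separated field on the first data row
# (KVNO, date, time, principal); take the first entry and stop.
princ=$(printf '%s\n' "$sample" | awk 'NR>3 {print $4; exit}')
echo "$princ"
# Then: kinit -kt /etc/security/keytabs/hdfs.headless.keytab "$princ"
```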
10-13-2016
08:10 AM
7 Kudos
SYMPTOM The Oozie web UI doesn't work in Internet Explorer in a Kerberized environment; it gives the error message below even after making sure that the Ext JS libraries are installed correctly. "Oozie web console is disabled. To enable Oozie web console install the Ext JS library." ROOT CAUSE https://issues.apache.org/jira/browse/OOZIE-2322 - caused by an empty space being set as the value of oozie.authentication.cookie.domain. WORKAROUND 1. Set the value of oozie.authentication.cookie.domain to your corporate domain name in oozie-site.xml via the Ambari UI. Example: <property>
<name>oozie.authentication.cookie.domain</name>
<value>hortonworks.com</value>
</property> 2. Stop the Oozie server via Ambari. 3. Move oozie.war to another location: mv /usr/hdp/current/oozie-server/oozie-server/webapps/oozie.war /root/ 4. Move the Oozie webapp directory to another location: mv /usr/hdp/current/oozie-server/oozie-server/webapps/oozie /root/ 5. Regenerate the Oozie war file using the command below: /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war 6. Start the Oozie server via Ambari. 7. Clear the cookies from your browser and access the Oozie web UI. RESOLUTION This issue has been fixed in Oozie 4.3.0.
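To double-check the value before restarting, the property block from step 1 can be generated from a single variable, which guards against the empty value that triggers the bug (hortonworks.com here is just the article's example domain):

```shell
# Refuse to emit the property if the domain is empty -- an empty value is
# exactly what OOZIE-2322 chokes on.
domain='hortonworks.com'
if [ -z "$domain" ]; then
  echo "error: cookie domain must not be empty" >&2
  exit 1
fi
# Print the oozie-site.xml snippet with the domain substituted in.
printf '<property>\n  <name>oozie.authentication.cookie.domain</name>\n  <value>%s</value>\n</property>\n' "$domain"
```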
10-13-2016
08:05 AM
@Gerd Koenig - Next tutorial is up - Blueprint setup with NN HA - https://community.hortonworks.com/articles/61358/automate-hdp-installation-using-ambari-blueprints-2.html
10-13-2016
08:02 AM
7 Kudos
In the previous post we saw how to install a multi-node HDP cluster using Ambari Blueprints. In this post we will see how to automate HDP installation with NameNode HA using Ambari Blueprints.

Note - From Ambari 2.6.x onwards, you will have to register a VDF to use an internal repository; otherwise Ambari will pick the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions below 2.6.x, this guide works without any modifications. Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Below are simple steps to install an HDP multi-node cluster with NameNode HA using an internal repository via Ambari Blueprints.

Step 1: Install the Ambari server using the steps under the link below: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

Step 2: Register the ambari-agent manually. Install the ambari-agent package on all nodes in the cluster and set hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini

Step 3: Configure blueprints. Please follow the steps below to create the blueprints.

3.1 Create a hostmapping.json file as shown below. Note - This file contains information about all the hosts that are part of your HDP cluster.
{
"blueprint" : "prod",
"default_password" : "hadoop",
"host_groups" :[
{
"name" : "prodnode1",
"hosts" : [
{
"fqdn" : "prodnode1.openstacklocal"
}
]
},
{
"name" : "prodnode2",
"hosts" : [
{
"fqdn" : "prodnode2.openstacklocal"
}
]
},
{
"name" : "prodnode3",
"hosts" : [
{
"fqdn" : "prodnode3.openstacklocal"
}
]
}
]
}

3.2 Create a cluster_configuration.json file; it contains the mapping of hosts to HDP components.
{
"configurations" : [
{ "core-site": {
"properties" : {
"fs.defaultFS" : "hdfs://prod",
"ha.zookeeper.quorum" : "%HOSTGROUP::prodnode1%:2181,%HOSTGROUP::prodnode2%:2181,%HOSTGROUP::prodnode3%:2181"
}}
},
{ "hdfs-site": {
"properties" : {
"dfs.client.failover.proxy.provider.prod" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled" : "true",
"dfs.ha.fencing.methods" : "shell(/bin/true)",
"dfs.ha.namenodes.prod" : "nn1,nn2",
"dfs.namenode.http-address" : "%HOSTGROUP::prodnode1%:50070",
"dfs.namenode.http-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50070",
"dfs.namenode.http-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50070",
"dfs.namenode.https-address" : "%HOSTGROUP::prodnode1%:50470",
"dfs.namenode.https-address.prod.nn1" : "%HOSTGROUP::prodnode1%:50470",
"dfs.namenode.https-address.prod.nn2" : "%HOSTGROUP::prodnode3%:50470",
"dfs.namenode.rpc-address.prod.nn1" : "%HOSTGROUP::prodnode1%:8020",
"dfs.namenode.rpc-address.prod.nn2" : "%HOSTGROUP::prodnode3%:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::prodnode1%:8485;%HOSTGROUP::prodnode2%:8485;%HOSTGROUP::prodnode3%:8485/prod",
"dfs.nameservices" : "prod"
}}
}],
"host_groups" : [
{
"name" : "prodnode1",
"components" : [
{
"name" : "NAMENODE"
},
{
"name" : "JOURNALNODE"
},
{
"name" : "ZKFC"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "FALCON_CLIENT"
},
{
"name" : "OOZIE_CLIENT"
},
{
"name" : "HIVE_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
}
],
"cardinality" : 1
},
{
"name" : "prodnode2",
"components" : [
{
"name" : "JOURNALNODE"
},
{
"name" : "MYSQL_SERVER"
},
{
"name" : "HIVE_SERVER"
},
{
"name" : "HIVE_METASTORE"
},
{
"name" : "WEBHCAT_SERVER"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "FALCON_SERVER"
},
{
"name" : "OOZIE_SERVER"
},
{
"name" : "FALCON_CLIENT"
},
{
"name" : "OOZIE_CLIENT"
},
{
"name" : "HIVE_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
}
],
"cardinality" : 1
},
{
"name" : "prodnode3",
"components" : [
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "JOURNALNODE"
},
{
"name" : "ZKFC"
},
{
"name" : "NAMENODE"
},
{
"name" : "APP_TIMELINE_SERVER"
},
{
"name" : "HISTORYSERVER"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "HIVE_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
}
],
"cardinality" : 1
}
],
"Blueprints" : {
"blueprint_name" : "prod",
"stack_name" : "HDP",
"stack_version" : "2.4"
}
}

Note - I have kept the NameNodes on prodnode1 and prodnode3; you can change this according to your requirements. I have also added a few more services like Hive, Falcon, and Oozie; you can remove them or add more as needed.

Step 4: Create an internal repository map.

4.1: HDP repository - copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it in a repo.json file.
{
"Repositories":{
"base_url":"http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.4.2.0",
"verify_base_url":true
}
}

4.2: HDP-UTILS repository - copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it in an hdputils-repo.json file.
{
"Repositories" : {
"base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.20",
"verify_base_url" : true
}
}

Step 5: Register the blueprint with the Ambari server by executing the command below. Note that the blueprint name in the URL should match "blueprint_name" in cluster_configuration.json, which is "prod" here: curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/prod -d @cluster_configuration.json

Step 6: Set up the internal repos via the REST API by executing the curl calls below: curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json

Step 7: Pull the trigger! The command below starts the cluster installation: curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmapping.json

Please refer to Part 4 for setting up HDP with Kerberos authentication via Ambari Blueprints.

Please feel free to comment if you need any further help on this. Happy Hadooping!!
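Before POSTing any of the JSON payloads, it is worth running them through a JSON parser locally, since Ambari's REST API tends to answer a malformed body with an unhelpful 400. A sketch using python3's stdlib json.tool (the trimmed one-host payload below just stands in for your real hostmapping.json):

```shell
# Write a trimmed host-mapping payload and syntax-check it before POSTing.
cat > /tmp/hostmapping-check.json <<'EOF'
{
  "blueprint" : "prod",
  "default_password" : "hadoop",
  "host_groups" : [
    { "name" : "prodnode1",
      "hosts" : [ { "fqdn" : "prodnode1.openstacklocal" } ] }
  ]
}
EOF
if python3 -m json.tool /tmp/hostmapping-check.json >/dev/null; then
  echo "hostmapping JSON OK"
else
  echo "hostmapping JSON INVALID"
fi
```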
10-05-2016
11:15 AM
@mayki wogno Unfortunately, you can't get job-related information from this; however, you can infer the job from the information above. Username - zazi Service principal - oozie/master003@fma.com Host from which the file was accessed - 10.xx.224.9 Please accept the answer if this was helpful.
10-05-2016
09:25 AM
4 Kudos
@mayki wogno If you have Ranger configured then it's very easy: just enable the HDFS plugin and Ranger will do this for you. More information on Ranger - http://hortonworks.com/apache/ranger/ If you don't have Ranger, you can check this in the HDFS audit log (/var/log/hadoop/hdfs/hdfs-audit.log). Hope this information helps.
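A quick way to see who read what from hdfs-audit.log is to grep the key=value fields out of each line. A sketch on a synthetic audit entry (the real log uses the same allowed=/ugi=/cmd=/src= layout, but this particular line is made up):

```shell
# One synthetic hdfs-audit.log entry; in practice you would grep the file:
#   grep 'cmd=open' /var/log/hadoop/hdfs/hdfs-audit.log
line='2016-10-05 09:00:01,123 INFO FSNamesystem.audit: allowed=true ugi=zazi (auth:KERBEROS) ip=/10.1.2.3 cmd=open src=/data/file.txt dst=null perm=null'

# Pull out the user (ugi=) and the file that was read (src=).
user=$(printf '%s\n' "$line" | grep -o 'ugi=[^ ]*' | cut -d= -f2)
src=$(printf '%s\n' "$line" | grep -o 'src=[^ ]*' | cut -d= -f2)
echo "$user opened $src"
```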