Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15048 | 03-08-2019 06:33 PM |
| | 6190 | 02-15-2019 08:47 PM |
| | 5106 | 09-26-2018 06:02 PM |
| | 12623 | 09-07-2018 10:33 PM |
| | 7459 | 04-25-2018 01:55 AM |
07-26-2016
05:13 AM
2 Kudos
@oula.alshiekh@gmail.com alshiekh Yes, you can install Hue on Apache Hadoop. Make sure to configure the correct repository and you will get the Hue packages.
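A minimal sketch of the package-based install, assuming a CentOS node where /etc/yum.repos.d already contains a repo file for a repository that ships the hue packages (the config path and setting names below are the common defaults, not from the original answer, so adjust for your install):

# Install Hue from the configured repository.
yum clean all
yum install -y hue

# Point Hue at your Apache Hadoop cluster (fs_defaultfs, webhdfs_url,
# ResourceManager host, etc.), then start the service.
vi /etc/hue/conf/hue.ini
service hue start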
07-26-2016
04:40 AM
2 Kudos
@Sumit Agrawal Is this a single-node cluster? I can see that you are getting an SSH-related error while bootstrapping the Ambari agent. Is there a reason you are using localhost.localdomain? Do you have a localhost.localdomain entry in your /etc/hosts file that points to 127.0.0.1? I would suggest double-checking your /etc/hosts file.
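A quick way to check this on the node (a minimal sketch; nothing here changes the system):

# Look for a localhost.localdomain entry mapped to 127.0.0.1.
grep -n "localhost" /etc/hosts

# The FQDN Ambari bootstraps against should resolve to the node's real IP,
# not to the loopback address.
hostname -f
ping -c1 "$(hostname -f)"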
07-26-2016
04:35 AM
3 Kudos
@Sai ram There are no direct EL variables to get the start time and end time in Oozie; '${timestamp()}' only returns the current timestamp in UTC. I would suggest another approach: start and end your Oozie workflow with an email action. That way you will receive two emails, one when the workflow starts and another when it ends. If you want a better solution, write a script that fetches the start and end times from the Oozie CLI or the Oozie database and emails a tabular report for multiple jobs at once. Hope this information helps.
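As a rough sketch of the scripted approach, the Oozie CLI prints the job's Started and Ended fields in its job summary, so something like the following could be wrapped in a loop over job IDs (the Oozie URL, job ID, and email address are placeholders):

# Placeholders: adjust the Oozie URL and workflow job ID for your cluster.
export OOZIE_URL=http://oozie-host.example.com:11000/oozie
JOB_ID=0000123-160726000000001-oozie-oozi-W

# "Started" and "Ended" appear in the job summary printed by the CLI;
# adjust the pattern if your Oozie version formats the output differently.
oozie job -info "$JOB_ID" | egrep '^(Started|Ended)' | mailx -s "Oozie job $JOB_ID times" you@example.com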
07-26-2016
04:24 AM
@Obaid Salikeen - Please accept the answer if it helped you.
07-26-2016
04:16 AM
11 Kudos
In the previous post we saw how to install a single-node HDP cluster using Ambari Blueprints. In this post we will see how to automate a multinode HDP installation using Ambari Blueprints.

Note - From Ambari 2.6.x onwards, you must register a VDF to use an internal repository; otherwise Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions older than 2.6.x, this guide works without any modifications.
Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Below are simple steps to install a multinode HDP cluster from an internal repository via Ambari Blueprints.

Step 1: Install the Ambari server using the steps mentioned under the link below.
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

Step 2: Register ambari-agent manually.
Install the ambari-agent package on all the nodes in the cluster and set hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini (see the example commands at the end of this post).

Step 3: Configure blueprints.
Please follow the steps below to create the Blueprints.

3.1 Create a hostmapping.json file as shown below.
Note - This file contains information about all the hosts that are part of your HDP cluster.

{
"blueprint" : "multinode-hdp",
"default_password" : "hadoop",
"host_groups" :[
{
"name" : "host2",
"hosts" : [
{
"fqdn" : "host2.crazyadmins.com"
}
]
},
{
"name" : "host3",
"hosts" : [
{
"fqdn" : "host3.crazyadmins.com"
}
]
},
{
"name" : "host4",
"hosts" : [
{
"fqdn" : "host4.crazyadmins.com"
}
]
}
]
}

3.2 Create a cluster_configuration.json file; it contains the mapping of hosts to HDP components.

{
"configurations": [],
"host_groups": [{
"name": "host2",
"components": [{
"name": "PIG"
}, {
"name": "METRICS_COLLECTOR"
}, {
"name": "KAFKA_BROKER"
}, {
"name": "HISTORYSERVER"
}, {
"name": "HBASE_REGIONSERVER"
}, {
"name": "OOZIE_CLIENT"
}, {
"name": "HBASE_CLIENT"
}, {
"name": "NAMENODE"
}, {
"name": "SUPERVISOR"
}, {
"name": "HCAT"
}, {
"name": "METRICS_MONITOR"
}, {
"name": "APP_TIMELINE_SERVER"
}, {
"name": "NODEMANAGER"
}, {
"name": "HDFS_CLIENT"
}, {
"name": "HIVE_CLIENT"
}, {
"name": "FLUME_HANDLER"
}, {
"name": "DATANODE"
}, {
"name": "WEBHCAT_SERVER"
}, {
"name": "ZOOKEEPER_CLIENT"
}, {
"name": "ZOOKEEPER_SERVER"
}, {
"name": "STORM_UI_SERVER"
}, {
"name": "HIVE_SERVER"
}, {
"name": "FALCON_CLIENT"
}, {
"name": "TEZ_CLIENT"
}, {
"name": "HIVE_METASTORE"
}, {
"name": "SQOOP"
}, {
"name": "YARN_CLIENT"
}, {
"name": "MAPREDUCE2_CLIENT"
}, {
"name": "NIMBUS"
}, {
"name": "DRPC_SERVER"
}],
"cardinality": "1"
}, {
"name": "host3",
"components": [{
"name": "ZOOKEEPER_SERVER"
}, {
"name": "OOZIE_SERVER"
}, {
"name": "SECONDARY_NAMENODE"
}, {
"name": "FALCON_SERVER"
}, {
"name": "ZOOKEEPER_CLIENT"
}, {
"name": "PIG"
}, {
"name": "KAFKA_BROKER"
}, {
"name": "OOZIE_CLIENT"
}, {
"name": "HBASE_REGIONSERVER"
}, {
"name": "HBASE_CLIENT"
}, {
"name": "HCAT"
}, {
"name": "METRICS_MONITOR"
}, {
"name": "FALCON_CLIENT"
}, {
"name": "TEZ_CLIENT"
}, {
"name": "SQOOP"
}, {
"name": "HIVE_CLIENT"
}, {
"name": "HDFS_CLIENT"
}, {
"name": "NODEMANAGER"
}, {
"name": "YARN_CLIENT"
}, {
"name": "MAPREDUCE2_CLIENT"
}, {
"name": "DATANODE"
}],
"cardinality": "1"
}, {
"name": "host4",
"components": [{
"name": "ZOOKEEPER_SERVER"
}, {
"name": "ZOOKEEPER_CLIENT"
}, {
"name": "PIG"
}, {
"name": "KAFKA_BROKER"
}, {
"name": "OOZIE_CLIENT"
}, {
"name": "HBASE_MASTER"
}, {
"name": "HBASE_REGIONSERVER"
}, {
"name": "HBASE_CLIENT"
}, {
"name": "HCAT"
}, {
"name": "RESOURCEMANAGER"
}, {
"name": "METRICS_MONITOR"
}, {
"name": "FALCON_CLIENT"
}, {
"name": "TEZ_CLIENT"
}, {
"name": "SQOOP"
}, {
"name": "HIVE_CLIENT"
}, {
"name": "HDFS_CLIENT"
}, {
"name": "NODEMANAGER"
}, {
"name": "YARN_CLIENT"
}, {
"name": "MAPREDUCE2_CLIENT"
}, {
"name": "DATANODE"
}],
"cardinality": "1"
}],
"Blueprints": {
"blueprint_name": "multinode-hdp",
"stack_name": "HDP",
"stack_version": "2.3"
}
}

Step 4: Create an internal repository map.

4.1 HDP repository - copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it as repo.json.

{
"Repositories" : {
"base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.3.4.0",
"verify_base_url" : true
}
}

4.2 HDP-UTILS repository - copy the contents below, modify base_url to the hostname/IP address of your internal repository server, and save it as hdputils-repo.json.

{
"Repositories" : {
"base_url" : "http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.20",
"verify_base_url" : true
}
}

Step 5: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/multinode-hdp -d @cluster_configuration.json

Step 6: Set up the internal repositories via the REST API by executing the curl calls below.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.20 -d @hdputils-repo.json . Step 7: Pull the trigger! Below command will start cluster installation. curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json . Note - Please refer third part of this tutorial if you want to setup a multinode cluster with Namenode HA . Please feel free to comment if you need any further help on this. Happy Hadooping!!
07-26-2016
04:09 AM
16 Kudos
What are Ambari Blueprints? Ambari Blueprints are a definition of your HDP cluster in JSON format. They contain information about all the hosts in your cluster, their components, the mapping of stack components to hosts or host groups, and other cool stuff. Using Blueprints we can call the Ambari APIs to completely automate the HDP installation process. Interesting stuff, isn't it? Let's get started with a single-node cluster installation. Below are the steps to set up a single-node HDP cluster with Ambari Blueprints.

Note - From Ambari 2.6.x onwards, you must register a VDF to use an internal repository; otherwise Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions older than 2.6.x, this guide works without any modifications.
Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Step 1: Install the Ambari server using the steps mentioned under the link below.
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx6.html

Step 2: Register ambari-agent manually.
Install the ambari-agent package on all the nodes in the cluster and set hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.

Step 3: Configure blueprints.
Please follow the steps below to create the Blueprints.

3.1 Create a hostmapping.json file as shown below.

{
"blueprint" : "single-node-hdp-cluster",
"default_password" : "admin",
"host_groups" :[
{
"name" : "host_group_1",
"hosts" : [
{
"fqdn" : "<fqdn-of-single-node-cluster-machine>"
}
]
}
]
}

3.2 Create a cluster_configuration.json file; it contains the mapping of hosts to HDP components.

{
"configurations" : [ ],
"host_groups" : [
{
"name" : "host_group_1",
"components" : [
{
"name" : "NAMENODE"
},
{
"name" : "SECONDARY_NAMENODE"
},
{
"name" : "DATANODE"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "HISTORYSERVER"
},
{
"name" : "APP_TIMELINE_SERVER"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "ZOOKEEPER_CLIENT"
}
],
"cardinality" : "1"
}
],
"Blueprints" : {
"blueprint_name" : "single-node-hdp-cluster",
"stack_name" : "HDP",
"stack_version" : "2.3"
}
}

Step 4: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-hostname>:8080/api/v1/blueprints/<blueprint-name> -d @cluster_configuration.json

Step 5: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-host>:8080/api/v1/clusters/<new-cluster-name> -d @hostmapping.json

Step 6: We can track the installation status with the REST calls below, or check the same from the Ambari UI.

curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/mycluster/requests/
curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://<ambari-hostname>:8080/api/v1/clusters/mycluster/requests/<request-number>

Thank you for your time! Please read the next part to see the installation of a multinode HDP cluster using Ambari Blueprints.

Happy Hadooping!! 🙂
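Building on Step 6, a small polling loop can watch the request until Ambari reports a terminal state. This is a minimal sketch, assuming the cluster is named mycluster, admin:admin credentials, and request number 1 (use the request id returned in Step 5); the Requests/request_status field follows the standard Ambari API but is worth verifying against your Ambari version.

AMBARI=http://ambari.example.com:8080    # placeholder for <ambari-hostname>
while true; do
  STATUS=$(curl -s -H "X-Requested-By: ambari" -u admin:admin \
    "$AMBARI/api/v1/clusters/mycluster/requests/1?fields=Requests/request_status" \
    | grep '"request_status"' | awk -F'"' '{print $4}')
  echo "Install status: $STATUS"
  case "$STATUS" in
    COMPLETED|FAILED|ABORTED) break ;;
  esac
  sleep 30
done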
07-22-2016
12:22 AM
2 Kudos
@Wing Lo You can use the ktutil utility for creating keytabs on Windows. Please refer to the link below for more details: http://www.itadmintools.com/2011/07/creating-kerberos-keytab-files.html
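For reference, a typical MIT Kerberos ktutil session looks like the one below; the principal, encryption type, and keytab file name are placeholders, and the exact commands available depend on the Kerberos distribution installed on your machine.

# Interactive ktutil session (lines starting with "ktutil:" are typed at its prompt).
ktutil
ktutil:  addent -password -p user@EXAMPLE.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for user@EXAMPLE.COM:
ktutil:  wkt user.keytab
ktutil:  quit

# Verify the generated keytab.
klist -kt user.keytab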
07-22-2016
12:12 AM
@Aman Poonia - Can you please elaborate on your question? Are you asking whether we can configure 'n' number of nodes as NameNodes in Ambari?
07-21-2016
11:40 PM
2 Kudos
This tutorial has been successfully tried on HDP 2.4.2.0 and Ambari 2.2.2.0. I have my HDP cluster Kerberized with NameNode HA.

Please follow the steps below to configure the File View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari server for Kerberos with the steps mentioned in the article below (follow steps 1 to 5).
https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

Note - If you are running the Ambari server as the root user, add these properties:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running the Ambari server as a non-root user, add these properties to core-site.xml instead:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

Please replace <ambari-server-user> with the user running the Ambari server in the example above.

I'm assuming that your Ambari server principal is ambari-server@REALM.COM; if not, replace 'ambari-server' with your principal's user part:

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a user directory on HDFS for the user accessing the File View. For example, in my case I'm using the admin user to access the File View.

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> click Manage Ambari --> Views --> edit the File View (create a new one if it doesn't exist already) and configure its settings.

Note - You may need to modify the values as per your environment settings!

After the above steps, you should be able to access your File View without any issues. If you receive any error(s), check /var/log/ambari-server/ambari-server.log for more details and troubleshooting.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
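As an optional check after Steps 2 and 3, the commands below verify that the proxyuser settings are active and that the user directory exists. This is a minimal sketch; the ambari-server principal and the admin user match the examples above and may differ in your environment.

# Confirm the proxyuser properties picked up by the client configuration.
hdfs getconf -confKey hadoop.proxyuser.ambari-server.hosts
hdfs getconf -confKey hadoop.proxyuser.ambari-server.groups

# Confirm the File View user's home directory exists with the expected owner.
sudo -u hdfs hadoop fs -ls /user | grep admin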