Member since: 10-02-2015
Posts: 50
Kudos Received: 12
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1391 | 01-03-2018 04:12 PM
 | 1110 | 01-03-2018 04:07 PM
 | 1553 | 07-20-2017 02:18 PM
 | 1840 | 06-22-2017 09:33 PM
 | 1097 | 03-20-2017 02:57 PM
02-08-2017 04:02 PM
Hi @Navdeep Singh, did this resolve your Blueprint deployment issue? If so, could you please accept this answer so that it might be useful for others as well? Thanks
02-06-2017 04:57 PM
Hi @Navdeep Singh, The problem you're experiencing is a configuration issue. I tried out the Blueprint you posted here on a local Vagrant cluster, and there were no exceptions in the ambari-server log, which seems to indicate that the Blueprint deployment itself completed without any unexpected errors. Looking through the HBase logs, however, I noticed some exceptions when HBase attempts to connect to HDFS. Generally, HBase must be configured to point at the HDFS NameNode being used. In your particular case, the configuration was pointing to a "localhost" address, which is incorrect, since the NameNodes in your cluster are not deployed on the same instance as either of the HBase Master components. If you add the following configuration block to your Blueprint, the deployment should work fine:

{
"hbase-site" : {
"properties" : {
"hbase.rootdir" : "hdfs://hdpcluster/apps/hbase/data"
}
}
}

Since your HDFS configuration defines a nameservice named "hdpcluster", that nameservice must also be used in the HBase configuration that points to the root directory in HDFS used by HBase. I modified a local copy of your Blueprint with this change added; the cluster deployed properly, and HBase started up as expected (active and standby masters started, RegionServers started). Note that for Blueprint deployments, the HDFS HA settings are not automatically applied by the Blueprints processor to services/components that depend upon HDFS. This should probably be addressed in a future version of Ambari. I've included a modified copy of your Blueprint below, which works with the new configuration added. Hope this helps.

{
"configurations" : [
{
"hdfs-site" : {
"properties" : {
"dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
"dfs.client.failover.proxy.provider.hdpcluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.namenode.rpc-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::slave_1%:8485;%HOSTGROUP::master_2%:8485/hdpcluster",
"dfs.namenode.http-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:50070",
"dfs.namenode.http-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:50070",
"dfs.ha.fencing.methods" : "shell(/bin/true)",
"dfs.nameservices" : "hdpcluster",
"dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
"dfs.ha.namenodes.hdpcluster" : "nn1,nn2",
"dfs.namenode.https-address.hdpcluster.nn2" : "%HOSTGROUP::master_2%:50470",
"dfs.namenode.rpc-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:8020",
"dfs.namenode.https-address.hdpcluster.nn1" : "%HOSTGROUP::master_1%:50470",
"dfs.ha.automatic-failover.enabled" : "true"
}
}
},
{
"core-site" : {
"properties" : {
"fs.defaultFS" : "hdfs://hdpcluster",
"ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::slave_1%:2181"
}
}
},
{
"hbase-site" : {
"properties" : {
"hbase.rootdir" : "hdfs://hdpcluster/apps/hbase/data"
}
}
}
],
"host_groups" : [
{
"components" : [
{
"name" : "HDFS_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "HISTORYSERVER"
},
{
"name" : "TEZ_CLIENT"
},
{
"name" : "ZKFC"
},
{
"name" : "JOURNALNODE"
},
{
"name" : "NAMENODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "METRICS_MONITOR"
}
],
"configurations" : [ ],
"name" : "master_1",
"cardinality" : "1"
},
{
"components" : [
{
"name" : "NAMENODE"
},
{
"name" : "TEZ_CLIENT"
},
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "ZKFC"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "METRICS_GRAFANA"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "METRICS_MONITOR"
},
{
"name" : "APP_TIMELINE_SERVER"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "JOURNALNODE"
},
{
"name" : "METRICS_COLLECTOR"
}
],
"configurations" : [ ],
"name" : "master_2",
"cardinality" : "1"
},
{
"components" : [
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "DATANODE"
},
{
"name" : "HBASE_MASTER"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "HBASE_REGIONSERVER"
},
{
"name" : "METRICS_MONITOR"
},
{
"name" : "HBASE_CLIENT"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "TEZ_CLIENT"
},
{
"name" : "NODEMANAGER"
}
],
"configurations" : [ ],
"name" : "slave_2",
"cardinality" : "1"
},
{
"components" : [
{
"name" : "HDFS_CLIENT"
},
{
"name" : "HBASE_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "HBASE_MASTER"
},
{
"name" : "TEZ_CLIENT"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "METRICS_MONITOR"
},
{
"name" : "JOURNALNODE"
},
{
"name" : "HBASE_REGIONSERVER"
},
{
"name" : "MAPREDUCE2_CLIENT"
}
],
"configurations" : [ ],
"name" : "slave_1",
"cardinality" : "1"
}
],
"settings" : [ ],
"Blueprints" : {
"blueprint_name" : "blueprint992c1c9a-b38d-4a8f-bb6f-b4e45f7f447d",
"stack_name" : "HDP",
"stack_version" : "2.5",
"security" : {
"type" : "NONE"
}
}
}
01-17-2017 05:43 PM
1 Kudo
Hi @Kuldeep Kulkarni, Generally, it is good practice to avoid setting passwords in the Blueprint document. We recommend instead that any passwords be configured in the Cluster Creation Template, the document that is POST-ed to actually create a cluster based on a given Blueprint. Since the Cluster Creation Template is not persisted by Ambari, it is usually the best place to configure passwords. While not a perfect solution, since the document still includes the passwords in clear text, this does have the advantage of keeping the passwords out of the Blueprint, which is persisted by Ambari and is available via the REST API (although the "secret reference" feature usually guarantees that the passwords are not visible to a REST client). Hope this helps.
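As a rough sketch of what that looks like (the Blueprint name, host name, and password value here are just placeholders), a Cluster Creation Template carrying a password might be structured like this:

{
  "blueprint" : "my-blueprint",
  "configurations" : [
    {
      "hive-site" : {
        "properties" : {
          "javax.jdo.option.ConnectionPassword" : "my-hive-db-password"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "master_1",
      "hosts" : [ { "fqdn" : "master1.example.com" } ]
    }
  ]
}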
12-20-2016 10:15 PM
Hi @wbu, @Alejandro Fernandez's and @smagyari's answers are both correct. The main problem is that SmartSense's Ambari Stack definitions are not included in the default stack definitions. Generally, configuration properties that are passwords are marked with metadata indicating that a given property is of type "PASSWORD". This metadata is used by the Blueprints processor to determine which properties can be set with the value of "default_password", which is set in the Cluster Creation Template. In the current release (Ambari 2.4), the only way to resolve this is to follow @smagyari's recommendation and set the password directly. Generally, we recommend that passwords only be included in the Cluster Creation Template, since Ambari does not persist that document. Hope this helps!
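To illustrate the two mechanisms side by side (the config type "smartsense-properties" and the property name "smartsense.password" are hypothetical placeholders, not the actual SmartSense names), a Cluster Creation Template can combine "default_password", which applies to every property marked as type "PASSWORD", with an explicit override for a property that lacks that metadata:

{
  "blueprint" : "my-blueprint",
  "default_password" : "my-default-password",
  "configurations" : [
    {
      "smartsense-properties" : {
        "properties" : {
          "smartsense.password" : "set-explicitly-here"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "master_1",
      "hosts" : [ { "fqdn" : "master1.example.com" } ]
    }
  ]
}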
08-26-2016 07:34 PM
Hi @Raghu Udiyar, While the Blueprint POST-ed to Ambari will not change as the state of a cluster changes, you can consider "exporting" a Blueprint from a live cluster. After a set of changes to the cluster (config changes, services added/deleted, etc.), exporting the Blueprint will provide a Blueprint that describes the current state of the cluster, which may differ from the original Blueprint used to create the cluster. Based on what I've seen in this issue, it looks like you could use the Blueprint export feature to maintain a Blueprint of the current changes, so that you could always recreate this cluster layout on different hardware if necessary. Here's a link to the Blueprints documentation on Blueprint exports: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-APIResourcesandSyntax

Blueprints can also be used to create HA-based clusters from scratch. That feature has been present since Ambari 2.0, and more documentation on it can be found at: https://cwiki.apache.org/confluence/display/AMBARI/Blueprint+Support+for+HA+Clusters
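For reference, the export itself is a single GET call using the "format=blueprint" query parameter (the host, port, credentials, and cluster name below are placeholders):

curl -u admin:admin "http://ambari-host:8080/api/v1/clusters/mycluster?format=blueprint"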
04-08-2016 01:13 PM
2 Kudos
Hi @Jan Andreev, I'm not exactly sure why you're seeing the issue with automatic registration, but if you're just trying to get things up and running manually, you can probably just modify the ambari-agent config files. Your manual setup step may have failed if the agents are not configured to point to the ambari-server instance in your cluster. You mentioned that the registration doesn't fail on the ambari-server node, and that makes sense, since the default hostname pointer is "localhost". For manual configuration, you can just set this up in the ambari-agent config file; on CentOS 6 it will be in /etc/ambari-agent/conf/ambari-agent.ini. Just set the "hostname" property to the DNS name of the node running ambari-server, and restart your ambari-agent instances. Hope this helps! Bob
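For reference (assuming a default agent installation; the server hostname below is a placeholder), the relevant section of /etc/ambari-agent/conf/ambari-agent.ini looks roughly like this:

[server]
hostname=ambari-server.example.com
url_port=8440
secured_url_port=8441

After editing the file, restart the agent with "ambari-agent restart".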
04-06-2016 02:46 PM
1 Kudo
Hi @Anna Shaverdian, In addition to what has already been posted, I wanted to mention something about Blueprint exports. In general, you should be able to re-use an exported Blueprint from a running cluster without any changes. There are a few exceptions, however:
1. Passwords: Any password properties are filtered out of the exported Blueprint, and must be added back explicitly during a redeployment attempt.
2. External Database Connections: I'm defining an "external" DB connection as any DB that is used by the cluster but is not managed by Ambari. The DB instance used by Ranger is one example, but Oozie and Hive can also use separate instances that are not managed by Ambari. In these cases, the Blueprint export process filters out these properties, since they are not necessarily portable to the new environment.
From what I've seen in this posting, these are the kinds of properties that you'll need to add back into your Blueprint or Cluster Creation Template, as the other posts have indicated and as sketched below. Hope this helps, Thanks, Bob
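As a sketch of adding those filtered properties back (the JDBC URL and password values are placeholders; the property names are Oozie's standard JPAService settings), a "configurations" entry in the Blueprint or Cluster Creation Template might look like this:

{
  "oozie-site" : {
    "properties" : {
      "oozie.service.JPAService.jdbc.url" : "jdbc:mysql://db-host.example.com/oozie",
      "oozie.service.JPAService.jdbc.password" : "my-oozie-db-password"
    }
  }
}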
03-31-2016 01:20 PM
1 Kudo
This approach should work fine, but I would suggest a refinement that may improve performance somewhat, and will also make the returned status payload quite a bit smaller: if you use the partial response syntax provided by the Ambari REST API, you can filter out much of the data in the request resource returned by the call listed above. An example of using the partial response syntax is below:

http://ambari-hostname:ambari-port-number/api/v1/clusters/clusterone/requests/1?fields=Requests/completed_task_count,Requests/request_status,Requests/task_count,Requests/failed_task_count

The "fields" query parameter is used to limit the fields returned from the request resource. The fields I've mentioned here are the set I use, but you can also check the other properties returned by the resource, in case a particular property is more straightforward to use for this type of monitoring. I use this syntax quite a bit when I want to monitor the status of a Blueprint deployment.
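As a rough sketch of using this for monitoring (the host, credentials, and request ID are placeholders, and this assumes python is available for JSON parsing), a shell loop can poll just the request_status field until the deployment finishes:

# Poll the request resource until it leaves the PENDING/IN_PROGRESS states
while true; do
  STATUS=$(curl -s -u admin:admin \
    "http://ambari-host:8080/api/v1/clusters/clusterone/requests/1?fields=Requests/request_status" \
    | python -c 'import json,sys; print(json.load(sys.stdin)["Requests"]["request_status"])')
  echo "request_status: $STATUS"
  case "$STATUS" in
    PENDING|IN_PROGRESS) sleep 10 ;;
    *) break ;;
  esac
done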
01-05-2016 08:36 PM
The comment from jspeidel is correct. If Oozie HA is being used in this Blueprint, then "oozie.base.url" must be set explicitly to the address of the load balancer being used, since Oozie HA requires a separate load balancer that is external to each Oozie instance. If you are just testing out your Blueprint, you can set this property to the address of one or the other of the Oozie instances in your cluster. Here's a great reference on Oozie HA that will be helpful in setting up an Oozie HA Blueprint: https://oozie.apache.org/docs/4.1.0/AG_Install.html#High_Availability_HA
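As a sketch (the load balancer address and port are placeholders), the override in the Blueprint's "configurations" section would look something like this:

{
  "oozie-site" : {
    "properties" : {
      "oozie.base.url" : "http://oozie-loadbalancer.example.com:11000/oozie"
    }
  }
}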
12-17-2015 04:43 PM
1 Kudo
@hkropp, @Ali Bajwa, It is possible to define configuration groups when using a Blueprints install. Configuration in Blueprints can be specified at almost any level (cluster, blueprint, host group). A "host group" in Blueprints defines a set of components and configuration that can be applied to a host or group of hosts; the actual hosts are mapped in the Cluster Creation Template. You can specify configuration at the level of "host groups" in the Blueprint or Cluster Creation Template, and this has the effect of applying those configuration overrides only to the machine or group of machines mapped to that host group at deployment time, as in the sketch below. Specifying configuration at the "host group" level will cause the Ambari Blueprints processor to create Ambari configuration groups to manage that host-specific configuration. More information on Blueprint syntax and how configurations are applied may be found at: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-BlueprintStructure
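As a minimal sketch (the component list and the memory value are illustrative placeholders), host-group-level configuration sits inside the host group entry itself, rather than in the top-level "configurations" list:

{
  "host_groups" : [
    {
      "name" : "slave_1",
      "components" : [
        { "name" : "DATANODE" },
        { "name" : "NODEMANAGER" }
      ],
      "configurations" : [
        {
          "yarn-site" : {
            "properties" : {
              "yarn.nodemanager.resource.memory-mb" : "16384"
            }
          }
        }
      ],
      "cardinality" : "1"
    }
  ]
}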