Member since: 04-26-2019
Posts: 10
Kudos Received: 0
Solutions: 0
06-20-2019
10:44 AM
Thanks @Jay Kumar SenSharma. I thought so but wanted to confirm with the community. As for the local repository suggestion, we do have one internally, but to reduce the time it takes to bring a cluster up on AWS/Azure/GCP, we need to create custom images with the packages pre-installed. Also, since we aren't upgrading versions that frequently, this looks to be the more sustainable solution. Thanks for your input though. I will give custom AMIs a try and see how it goes.
06-19-2019
10:34 AM
Hello all, I want to know if there is an option for only configuring and starting components on a pre-warmed instance (one that already has components like Hive, YARN, etc. installed). I see there is a provision action "INSTALL_ONLY", but in my case I want to use an installation image that already has the required packages on it, so only configuring and starting the components is required (see the sketch below). Please feel free to ask for any additional info if needed. Looking forward to some pointers on this.
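For reference, and if I am reading the blueprint documentation correctly, the provision_action flag I mentioned sits at the top level of the cluster creation template, roughly like this (the blueprint name, password, and hostname below are placeholders, not my real values). What I am effectively looking for is an analogous option that skips the package installation step and only does configure + start:

  {
    "blueprint": "my-blueprint",
    "default_password": "changeme",
    "provision_action": "INSTALL_ONLY",
    "host_groups": [
      { "name": "master", "hosts": [ { "fqdn": "master1.example.com" } ] }
    ]
  }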
Labels:
- Apache Ambari
06-11-2019
09:33 AM
@Anshuman Mehta: Did you achieve success in what you were trying? I have a similar use case where I want multiple hosts/nodes in the 'master' host group and want to set up HA between components using that.
06-10-2019
12:47 PM
Hello all, I am getting this warning and cluster formation is stuck at the 'Logical Request: Provision Cluster' state:

  WARN [pool-20-thread-1] BlueprintConfigurationProcessor:1546 - The property 'dfs.namenode.secondary.http-address' is associated with the component 'SECONDARY_NAMENODE' which isn't mapped to any host group. This may affect configuration topology resolution.

Although the warning looks self-explanatory, I am surprised to see it since I don't have 'SECONDARY_NAMENODE' as a component in my blueprint, and the configuration in the blueprint doesn't contain any property 'dfs.namenode.secondary.http-address'. Can anyone explain what is happening and how I can correct it?

  {
    "hdfs-site": {
      "dfs.client.failover.proxy.provider.testcluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
      "dfs.datanode.data.dir": "/tmp/datanode",
      "dfs.datanode.failed.volumes.tolerated": "0",
      "dfs.ha.automatic-failover.enabled": "true",
      "dfs.ha.fencing.methods": "shell(/bin/true)",
      "dfs.ha.namenodes.testcluster": "nn1,nn2",
      "dfs.namenode.checkpoint.dir": "/tmp/hdfs/namesecondary",
      "dfs.namenode.http-address.testcluster.nn1": "%HOSTGROUP::ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com%:50070",
      "dfs.namenode.http-address.testcluster.nn2": "%HOSTGROUP::ec2-yy-yy-yy-yy.us-west-2.compute.amazonaws.com%:50070",
      "dfs.namenode.https-address.testcluster.nn1": "%HOSTGROUP::ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com%:50470",
      "dfs.namenode.https-address.testcluster.nn2": "%HOSTGROUP::ec2-yy-yy-yy-yy.us-west-2.compute.amazonaws.com%:50470",
      "dfs.namenode.name.dir": "/tmp/namenode",
      "dfs.namenode.rpc-address.testcluster.nn1": "%HOSTGROUP::ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com%:8020",
      "dfs.namenode.rpc-address.testcluster.nn2": "%HOSTGROUP::ec2-yy-yy-yy-yy.us-west-2.compute.amazonaws.com%:8020",
      "dfs.namenode.shared.edits.dir": "qjournal://%HOSTGROUP::master%:8485/testcluster",
      "dfs.nameservices": "abhardwaj-test-enterprise1",
      "dfs.replication": "1"
    }
  }
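In case it helps with diagnosing this, I am checking the resolved hdfs-site on the Ambari side roughly like this (the Ambari host and cluster name are placeholders for my environment), to see whether dfs.namenode.secondary.http-address is being injected from the stack defaults rather than from my blueprint:

  # List the hdfs-site configuration versions (tags) Ambari knows about
  curl -u admin:admin 'http://<ambari-host>:8080/api/v1/clusters/<clustername>/configurations?type=hdfs-site'

  # Fetch one version by tag and inspect its properties
  curl -u admin:admin 'http://<ambari-host>:8080/api/v1/clusters/<clustername>/configurations?type=hdfs-site&tag=<tag-from-previous-call>'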
Labels:
- Apache Ambari
- Apache Hadoop
05-10-2019
12:03 PM
Anything further on this, @Geoffrey Shelton Okot?
05-07-2019
03:23 PM
Thanks for the reply @Lester Martin. Will the blueprint take care of setting up / re-configuring component-level HA with this new node? In my scenario, I am setting up the cluster using HDP's Ansible repo. I plan to add the new node using the same mechanism, where I can take the VMs in the existing cluster with the active master (as the user's input) and make sure this new node is added to the cluster via Ansible.
05-06-2019
09:54 PM
Hey @Geoffrey Shelton Okot, I actually installed ambari-agent from the same repo as my active master. I also made the correct IP address available in the ambari-agent configuration file. That is why the new node showed up when I checked using the Ambari API. But the cluster name was missing for this new node: all the other existing nodes show the 'Clustername' key, just not the newly added node.
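For reference, this is roughly the call I used to inspect the new host (the server and host names below are placeholders); for hosts that already belong to a cluster, the response includes the cluster name under the Hosts object, which is the key that is missing for the new node:

  # Show what ambari-server knows about a single registered host
  curl -u admin:admin http://<ambari-server>:8080/api/v1/hosts/<new-node-fqdn>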
05-03-2019
03:51 PM
Hi @Geoffrey Shelton Okot, sorry for the delay in responding; I was relying on Gmail to notify me of any updates on my question. Thanks for writing a detailed answer to my questions. You make some valid points which I am definitely going to recommend for any new prod/QA setups. We have 2 masters in our HA setup, so that is fine. Both masters have a ZooKeeper Server, but as per your comments 3 are needed to form HA, i.e. an ensemble (please correct me if my understanding is wrong). We only have 1 worker node and, per your recommendation, I will suggest using 3. I want to explore node addition using the Ambari blueprint APIs, fired from Ansible. I had previously tried to add a new node to an existing HA cluster with very little success. The new node addition request that I submitted manually was accepted with 200 OK, but I couldn't see the node in the Ambari GUI. I then checked the list of nodes known to ambari-server and found that the new node is listed but isn't part of the cluster.

Command used to check hosts:

  curl -u admin:admin http://c7201.ambari.apache.org:8080/api/v1/hosts

I added the new node using this command:

  curl -i -H "X-Requested-By: ambari" -u admin:admin -X POST -d @basic1.json http://c7201.ambari.apache.org:8080/api/v1/clusters/<clustername>/hosts/<new-node>

and my basic1.json was:

  {
    "blueprint" : "<cluster-name>_blueprint",
    "host_group" : "ae-master2"
  }

Please suggest further. Thanks again.
05-03-2019
03:51 PM
Hello all, my questions are related to HA blueprints. Suppose I have a 2-master / 1-worker setup on AWS/Azure. Now one of my masters is down/faulty, and I want to add a new master (master3) to the existing cluster and remove the faulty/unhealthy master (master1) from it. While instantiating the cluster, I used the HA blueprint with 2 host groups (master1 and master2), each with its own set of components/services. Now suppose I spawn a master3 node on the cloud platform with all the required access to the existing cluster. Will the following approach work for getting a healthy, HA-enabled cluster with the new master3?
1. Get the cluster name using the Ambari APIs.
2. Create a host_mapping.json with the host group set to master1 (since master1 is faulty) and use the blueprint API to register the new master3 (sketch below).
3. Will Ambari take care of installing and starting the components/services and also rebalance the cluster with the HA settings?
Is there any other approach by which I can achieve HA again? Or will the re-configuration of all the HA components have to be handled outside Ambari's scope? Please let me know your views. I have limited exposure to Ambari at this time, so feel free to correct me or at least give some pointers.
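To make step 2 concrete, this is roughly the request I have in mind, modeled on the host-addition call I used in my earlier attempt (the server name, cluster name, blueprint name, and host FQDN below are placeholders):

  # Register the replacement master against the faulty master's host group
  curl -i -H "X-Requested-By: ambari" -u admin:admin -X POST \
    -d '{ "blueprint" : "<cluster-name>_blueprint", "host_group" : "master1" }' \
    http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<master3-fqdn>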
Labels:
- Apache Ambari
04-27-2019
03:39 PM
Hello there, I am fairly new to Hadoop and Ambari, so please excuse me if some of these queries seem absurd or illogical. TIA 🙂
-----
We have an HA-enabled Ambari cluster with 2 masters (each running ambari-server and ambari-agent) and 1 worker (running ambari-agent). The setup is created using the Ansible Git project, with a few modifications. We are able to get HA for ambari-server and a few components like ZooKeeper, etc. The requirement for which I am seeking help from the community: if one of the masters in the HA cluster goes down, we need to add a new node to the existing cluster that will join as a standby master node. With my limited exposure to Ambari, I feel that all the HA components would need to be re-configured to add this new node. Is there a better way to do this, or any way at all to achieve it? Can blueprints help in this case? If I set up a host_mapping.json file in which the host group is of the 'master' type, could Ambari itself figure out which components to configure and where to set up HA? Thanks for being patient and reading so much information. Please feel free to comment; any pointers will be really helpful.
Labels:
- Apache Ambari