Member since: 09-21-2015
Posts: 26
Kudos Received: 4
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5059 | 05-19-2016 11:47 AM
 | 2320 | 05-18-2016 09:21 PM
 | 5084 | 05-17-2016 11:09 AM
 | 5222 | 02-01-2016 09:24 PM
 | 5126 | 01-26-2016 12:23 PM
05-22-2016
10:14 PM
Hi, are you hitting any errors when you set up with the Hive roles created twice? Two metastores are needed for HA, and having two HiveServer2 (HS2) instances is also a valid configuration.
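For illustration only, here is a rough sketch of how doubled Hive roles could be laid out in a Director configuration file. The group names, counts, and placement are assumptions made for the example; only the role type names (HIVEMETASTORE, HIVESERVER2) are the standard CM ones.

```
# Illustrative fragment of the cluster section, not a complete config.
# Placing one metastore and one HS2 on each of two master groups yields the
# doubled Hive roles described above.
masters-1 {
  count: 1
  roles {
    HIVE: [HIVEMETASTORE, HIVESERVER2]
  }
}
masters-2 {
  count: 1
  roles {
    HIVE: [HIVEMETASTORE, HIVESERVER2]
  }
}
```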
05-19-2016
11:47 AM
1 Kudo
Hi Kartik, Regarding the sample configs you sent:
- You don't need to repeat all the master roles on each master. Most of them only need to be on the first master; only the extra ones required for HA go on additional nodes.
- If using Kafka, please make sure the appropriate product version, and a repository URL corresponding to the same version, are specified in the config file.
- Where the master role assignments differ, the group names also need to be unique (e.g. the group names for masters-1 and masters-2).
For setting up HDFS HA, we have a sample config file to help with the role assignments. It also covers examples for some of your other HA questions: https://github.com/cloudera/director-scripts/blob/master/configs/aws.ha.reference.conf Hope this helps...
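As a rough illustration of that layout (the linked aws.ha.reference.conf is the authoritative example; the group names, counts, and exact placement below are assumptions made for the sketch):

```
# Illustrative fragment of the cluster section; see aws.ha.reference.conf for the full layout.
# Shared master roles stay on the first group; only the extra HA roles appear on the second.
masters-1 {
  count: 2
  roles {
    ZOOKEEPER: [SERVER]
    HDFS: [NAMENODE, FAILOVERCONTROLLER, JOURNALNODE]
  }
}
masters-2 {
  count: 1
  roles {
    ZOOKEEPER: [SERVER]
    HDFS: [JOURNALNODE]
  }
}
```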
05-18-2016
09:21 PM
Hi, If you're using the generated config file, the key it references should be available on the filesystem of your Director instance. The filesystem path should be listed in the generated config file (labeled "privateKey"). Otherwise, could you try running 'find' for the key, by name, on the filesystem?
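For reference, a hedged sketch of what the SSH section of a generated config file typically contains; the username and key path below are made-up examples, not values from your file:

```
ssh {
    username: ec2-user                  # whatever login user your instances use
    privateKey: /path/to/your/key.pem   # hypothetical path; check the value in your generated file
}
```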
05-18-2016
10:33 AM
Yes, sure. I'll take a look today and get back to you.
05-17-2016
11:09 AM
Thanks Kartik. Let me know if removing Kafka from the conf file helped address the first-run issues.
05-17-2016
08:58 AM
1 Kudo
Hi Kartik, In the configuration file that you use for bootstrap:
* The list of services should include SOLR, KS_INDEXER, ZOOKEEPER, and HBASE.
* You would need to define 3 master roles.
* The SERVER role for ZooKeeper needs to be on all 3 (for ZK HA/quorum).
* The MASTER role for HBase needs to be on 2 of the masters.
* For KS Indexer, add the role HBASE_INDEXER to any 1 master (e.g. KS_INDEXER: [HBASE_INDEXER]).
* For Solr, add the role SOLR_SERVER to any 1 master (e.g. SOLR: [SOLR_SERVER]).
Note that the master role groups need to have unique names, e.g. masters-1 {}, masters-2 {}, and so on.
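A hedged sketch of one way those assignments could be spread across the three master groups (the counts, and which group carries HBASE_INDEXER and SOLR_SERVER, are illustrative choices, not requirements):

```
# Illustrative fragment: ZK SERVER on all 3 groups, HBASE MASTER on 2, indexer and Solr on 1.
masters-1 {
  count: 1
  roles {
    ZOOKEEPER: [SERVER]
    HBASE: [MASTER]
    KS_INDEXER: [HBASE_INDEXER]
    SOLR: [SOLR_SERVER]
  }
}
masters-2 {
  count: 1
  roles {
    ZOOKEEPER: [SERVER]
    HBASE: [MASTER]
  }
}
masters-3 {
  count: 1
  roles {
    ZOOKEEPER: [SERVER]
  }
}
```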
05-17-2016
08:39 AM
You can delete KAFKA from the list of services, and delete the KAFKA_BROKER role as well. Let me know if this helps. What version of Director are you using? And for debugging purposes, could you let me know if your configuration file had a "products" version and "parcelRepositories" URL specified for Kafka?
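If you keep Kafka instead of removing it, here is a hedged example of matching product and repository entries; the version numbers and repository URLs are illustrative assumptions, and the two Kafka entries must correspond to the same release:

```
products {
  CDH: 5.5
  KAFKA: 2.0    # illustrative version; must match the Kafka parcel repository below
}
parcelRepositories: ["http://archive.cloudera.com/cdh5/parcels/5.5/",
                     "http://archive.cloudera.com/kafka/parcels/2.0/"]
```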
05-16-2016
02:08 PM
Hi Kartik, The specific error message "Failed to download diagnostic data" is different from the first-run failure. Looking further down in the log you provided, it looks like the Kafka service failed to start:

[2016-05-16 06:46:40] ERROR [pipeline-thread-1] - c.c.l.b.UnboundedWaitForApiCommand: Command Start with ID 91 failed. Details: ApiCommand{id=91, name=Start, startTime=Mon May 16 06:44:52 EDT 2016, endTime=Mon May 16 06:44:52 EDT 2016, active=false, success=false, resultMessage=Failed to start service., serviceRef=ApiServiceRef{peerName=null, clusterName=C5-Reference-AWS, serviceName=CD-KAFKA-tzWIrNFs}, roleRef=null, hostRef=null, parent=null}

A few follow-up questions to help diagnose this:
- Are you using the UI to set up this cluster, or the configuration file on the CLI?
- Are you enabling Kerberos by any chance?
- What versions of CM, CDH, and Kafka are being installed? Are they the defaults that Director determines, or are you overriding them (the product version and/or the repository URL) as part of your setup?
- If you still have access to your Cloudera Manager instance, would you be able to help pull out the Kafka error? The steps are below. Once you log in:
a) On the home page, click on Recent Commands and find First Run. If you drill down, you should eventually come across the Kafka start failure. There should be links to stdout, stderr, and role logs, which may have more details on the cause of the failure.
b) Alternately, on your home page, click on the Kafka service. Under the Commands tab, you might find the history of the failed Start commands, which should lead you to the same logs.
Thanks, Jayita
03-11-2016
01:47 PM
Hi again, I went through this setup but was not able to reproduce the problem. There may have been a few differences in what each of us tried. To diagnose further:
- Did you run Packer directly, or via build-ami.sh?
- Did you override the CM URL as well, or just CDH?
- Was your AMI redhat6 or some other variant / operating system? (The parcel installed is for redhat6; if it were redhat7 it would need a different one.)
In my case I ran with build-ami.sh, using redhat6, and only overrode the CDH URL, so I ended up with CM 5.6 (it picks up the latest CM by default). My command was as follows, if you would like to try it:

PACKER_VARS="-var vpc_id=vpc-123456 -var subnet_id=subnet-123456 -var security_group_id=sg-123456 -var root_device_name=/dev/sda1" sh build-ami.sh us-west-1 ami-6283a827 "my_ami_name" http://archive-primary.cloudera.com/cdh5/parcels/5.5.1/

My directory listing of /opt/cloudera/parcel-cache and /opt/cloudera/parcel-repo matches yours, so we know things worked up to that point. A few other places to check on your system:
- Could you pull up the Cloudera Manager log to see if there are any errors around finding the parcel? You can find it on the instance at /var/log/cloudera-scm-server/cloudera-scm-server.log. In mine, after it starts the Jetty server, I see the line "Discovered parcel on CM server: CDH-5.5.1-1.cdh5.5.1.p0.11-el6.parcel". Does your log show any errors around finding the parcel?
- In the CM UI, in the top nav, there is an icon for parcels. Under this, do you see any errors? On this page, if you click Edit Settings, does the local parcel repository path show /opt/cloudera/parcel-repo and the remote path show http://archive.cloudera.com/cdh5/parcels/5.5.1/ ?
- If you check the CM API for parcels, what does it return? http://<cm>:7180/api/v11/clusters/<clustername>/parcels
Thanks, Jayita
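For that last API check, a hedged example of querying it with curl; admin/admin is only the CM default credential, and the <cm> and <clustername> placeholders are the same ones as in the URL above:

```
# Query the CM parcels API for the cluster; adjust credentials and placeholders for your setup
curl -u admin:admin "http://<cm>:7180/api/v11/clusters/<clustername>/parcels"
```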
03-10-2016
10:16 AM
Thanks for providing the information. I am trying to reproduce this and will get back to you.