I have been attempting to create a cluster using Cloudera Director v1.0.1. Using the aws.reference.conf distributed with the CloudFormation "Launch Cloudera EDH" template, I was able to build a 48-worker cluster with type: c3.8xlarge and image: ami-18a23f28.
Our performance team absolutely will not allow the use of SSDs in this cluster, so after looking around and speaking with AWS support, I chose m2.4xlarge for my worker nodes.
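For anyone following along, the change amounts to swapping the worker instance template in the config. This is only a sketch in the aws.reference.conf HOCON style; the template name `m24x` is made up here, and you should verify the exact field names against the aws.reference.conf shipped with your Director version:

```
instances {
    m24x {
        type: m2.4xlarge       # previous-generation type with magnetic (non-SSD) ephemeral storage
        image: ami-b8a63b88    # RHEL 6.4 PV (one of the AMIs I tried)
    }
}

cluster {
    workers {
        count: 48
        instance: ${instances.m24x}
    }
}
```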
I first tried the RHEL 6.4 PV image ami-b8a63b88, but it didn't work, so I tried the AWS-suggested image ami-f032acc0, which didn't work either.
By "didn't work" I mean the host installs succeeded, but the "Preparing instances" and CM agent deployment steps were stuck for three hours before I gave up. With the current-generation c3.8xlarge, the bootstrap completes in an hour and a half at most. I have gone through the build/terminate cycle several times.
So, after this long-winded data dump, my question is: are older-generation instance types that use magnetic storage supported by Director?
I hope my description is clear. Thanks for any help.
The web UI didn't seem to show any history beyond "canceled" and "failed due to suspended task", with no trace to examine. Management wants me to punt on Director and go with the build-it-yourself, no-wizards route.
Thanks anyway for the help. Time constraints will not let me follow up.
First of all, thanks for using Cloudera Director! I am happy to see everything worked fine while configuring multiple medium-sized clusters.
With regard to m2.4xlarge: there is no reason why it shouldn't work when running a supported operating system, and I'm re-testing this now. Director should work just fine with any instance type provided by AWS (using either SSDs or magnetic drives for ephemeral storage).
In terms of operating systems, Director currently works with Red Hat Enterprise Linux 6.4 (PV or HVM) or CentOS 6.4 (the official AMI from the AWS Marketplace). Amazon Linux (ami-f032acc0) is not currently supported.
In case it helps anyone else ....
I encountered this problem because I didn't specify any HDFS DataNodes in my configuration. I had specified HDFS-dependent services in a 'master' template with one master instance, but on my worker nodes the HDFS: [DATANODE] role was commented out.
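For illustration, the fix was simply to leave the DATANODE role uncommented in the workers' role list. This is a sketch in the aws.reference.conf style (the instance template name and the exact role lists should be checked against your own config):

```
workers {
    count: 3
    instance: ${instances.m24x}   # hypothetical instance template name

    roles {
        # DATANODE had been commented out here, so no DataNodes were
        # created and services depending on HDFS could not start.
        HDFS: [DATANODE]
        YARN: [NODEMANAGER]
    }
}
```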
It makes sense that the cluster couldn't start.