Member since: 07-12-2013
Posts: 435
Kudos Received: 117
Solutions: 82
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2009 | 11-02-2016 11:02 AM |
| | 3102 | 10-05-2016 01:58 PM |
| | 7729 | 09-07-2016 08:32 AM |
| | 8186 | 09-07-2016 08:27 AM |
| | 2107 | 08-23-2016 08:35 AM |
02-11-2016
12:50 PM
Yeah, you can register again with the same information and it should work. If it goes wrong again, then as I said, just send me the access code and I can dig deeper.
02-11-2016
11:12 AM
I would double-check that your access key is correct and unique for each attempt (if it's invalid, there unfortunately isn't a good way to notify you). An access key is linked to specific VMs, so keys aren't reusable.

Also, make sure you don't reboot the instances until the initial deployment has finished. When you reboot, AWS assigns new IPs. The cluster can deal with that once it's running, but as long as we need external control (i.e. during the initial deployment) the IPs must stay consistent.

Anything else that depends on the user doing something correctly should result in the stack being marked as a failure in CloudFormation. So if you're fairly sure the above issues don't apply, there may have been some other kind of failure we should take a look at. If you'd like to send me a direct message with the access keys for your attempts, I can retrieve the detailed logs for your specific case and investigate other possibilities.
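If you'd like to check on your end first, the stack's status is visible in the CloudFormation console; here's a rough sketch with the AWS CLI (the stack name below is a placeholder for whatever Cloudera Live created in your account):

```bash
# Check the overall stack status (CREATE_COMPLETE vs. CREATE_FAILED / ROLLBACK_*)
aws cloudformation describe-stacks \
  --stack-name my-cloudera-live-stack \
  --query 'Stacks[0].StackStatus'

# List the stack events to find the first reported failure, if any
aws cloudformation describe-stack-events \
  --stack-name my-cloudera-live-stack
```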
01-31-2016
06:16 AM
1 Kudo
In your case it looks like you need to restart hadoop-hdfs-datanode.
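A rough sketch of what that looks like on the command line, assuming a package-based CDH install where the service is managed by init scripts rather than Cloudera Manager:

```bash
# Restart the DataNode service on the affected host
sudo service hadoop-hdfs-datanode restart

# Verify it came back up
sudo service hadoop-hdfs-datanode status
```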
01-15-2016
07:15 AM
1 Kudo
In a nutshell, it looks like the cluster hit some delays trying to download Red Hat packages, and by the time it got well past that, the machines were being torn down. Almost all other clusters have been deploying reliably since my last post, so I'll reach out via private message to confirm some details and help you get a working cluster.
01-14-2016
08:23 PM
Yeah, you're *supposed* to get the email within about 10 minutes of the cluster going green in AWS. If it takes more than 15-20 minutes, I would usually just delete the cluster and contact us. In this case there were a number of abandoned deployments (i.e. machines deleted mid-deployment, with the system waiting to see if they come back) hogging resources. Everything that was delayed should now be running, and you should receive an email momentarily if you haven't already. Thanks for letting us know about the delay - sorry for the inconvenience.
01-13-2016
09:53 AM
It looks like you're logged into 'cloudera3', which is one of the DataNode / worker nodes and should not have MySQL on it. The welcome email you receive when the cluster is ready links to the guidance page, which lists all the nodes in the cluster. The Manager Node is the one you should log into to work through the tutorial.
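A quick sanity check you can run after logging in (this assumes the MySQL service is named 'mysqld', as on a typical CentOS/RHEL install; it should report a running service on the Manager Node and fail on the workers):

```bash
# Confirm which node you're actually on
hostname

# Check whether the MySQL server is installed and running on this host
sudo service mysqld status
```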
01-13-2016
06:48 AM
1 Kudo
The links to the registration form are the 3 buttons, "Cloudera Enterprise", "Cloudera + Tableau", and "Cloudera + Zoomdata", at www.cloudera.com/live. I agree that isn't as clear as it used to be - I'll see if that can be updated soon. To answer your question: Cloudera Live AWS codes are single-use, but you can register for new access codes.
01-12-2016
07:08 AM
1 Kudo
Everything in CDH is set up and running without Cloudera Manager. You can complete the tutorial (although a few sections will be irrelevant and get hidden if Cloudera Manager isn't launched), and all the examples in Hue should work (Hue has a built-in option to install examples for Hive & Impala SQL queries, the HBase and Search apps, Pig scripts, and more). You can also run Hadoop and Spark jobs, etc. Pretty much anything you would actually use to store and process data should work.
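For example, a quick way to confirm the cluster runs jobs without Cloudera Manager is one of the bundled MapReduce examples (the jar path below is the usual location in a package-based CDH install, so adjust if yours differs):

```bash
# Smoke-test MapReduce by estimating pi with 10 maps of 100 samples each
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
```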
01-10-2016
06:32 PM
2 Kudos
If you ran Sqoop with the '-m 1' argument, you should only expect to see one part file. If you're confused because the screenshot shows 3, that's because the tutorial is adapted to several different environments: on clusters with multiple disks, multiple mappers should be used, and you thus end up with multiple partitions of the data in HDFS. If you're on the QuickStart VM, it's likely that nothing is wrong.
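For reference, a sketch of a single-mapper import (the connection string, credentials, and table name here are placeholders, not necessarily what the tutorial uses):

```bash
# With a single mapper (-m 1), Sqoop writes exactly one part file per table
sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --table customers \
  -m 1

# You should see a single part-m-00000 under the table's directory in HDFS
hadoop fs -ls customers
```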
01-07-2016
02:04 PM
I'd need to take a look at a cluster and won't get a chance to for a little while, but that may not be a problem, so I would just proceed as-is. Displaying shard replicas in the interface might no longer happen: I personally found it cluttered the interface for something I never used, so I wouldn't be surprised if that was removed recently and the screenshot in the tutorial simply needs to be updated. Also, I believe it's step #1 that is not necessary, since it's already done when installing the sample datasets (the command in #1 creates a blank configuration; we provide the already-edited configuration). I think it's step #2 that you were missing, where that configuration gets uploaded to ZooKeeper.
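For anyone hitting the same thing, the two steps roughly correspond to these solrctl commands (the instancedir name and local path here are placeholders; use whatever the tutorial specifies):

```bash
# Step 1 (already done for you by the sample-dataset install):
# generate a blank configuration directory on the local disk
solrctl instancedir --generate $HOME/solr_configs

# Step 2 (the likely missing step): upload the configuration to
# ZooKeeper so a collection can be created from it
solrctl instancedir --create live_logs $HOME/solr_configs
```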