Member since: 07-12-2013
Posts: 435
Kudos Received: 117
Solutions: 82
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1227 | 11-02-2016 11:02 AM |
| | 1853 | 10-05-2016 01:58 PM |
| | 6722 | 09-07-2016 08:32 AM |
| | 6272 | 09-07-2016 08:27 AM |
| | 1225 | 08-23-2016 08:35 AM |
03-15-2016
12:56 PM
1 Kudo
The script I suggested is just an automated version of the process documented at the link you provided. I would just try running the script, and when it's done, go back and retry the Phoenix parcel.
03-15-2016
12:30 PM
It's a button on the desktop in the QuickStart VM. Since I saw you were using Docker, I added the CLI equivalent above: `sudo /home/cloudera/parcels`. That runs the same script, which installs the parcel matching the QuickStart image, removes the Linux packages, etc.
03-15-2016
12:07 PM
Via CLI, that script can be invoked as `sudo /home/cloudera/parcels`.
03-15-2016
12:06 PM
The reason is that you can't mix a Linux-package install with a parcel install. By default, the VM uses Linux packages so that you don't have to run Cloudera Manager unless you have the memory for it. There's a button on the desktop to migrate CDH to a parcel installation. After you've run that, the Phoenix parcel install should work.
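For reference, a minimal sketch of the whole sequence (the script path is what ships on the QuickStart VM; the CM navigation afterwards is from memory and may differ slightly in your version):

```bash
# Run the same migration script the desktop button triggers; it installs the
# parcel matching the QuickStart image and removes the Linux packages:
sudo /home/cloudera/parcels

# Then retry the Phoenix parcel from Cloudera Manager:
# Hosts -> Parcels -> Phoenix -> Download, Distribute, Activate
```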
03-13-2016
07:58 PM
1 Kudo
As the previous reply said, you need to use the Hive Query Editor. The error shows up if you use the Impala Query Editor because you're using a library written for Hive.
03-13-2016
07:19 AM
That public IP does not match the Manager Node of any cluster that's been deployed, so I can't be sure (send me a private message with your access code if you'd like me to check), but judging by the timing of your message and your username, the final email was successfully sent on Fri, Mar 11, 2016 at 5:14 PM Pacific Time. I would suggest checking your spam folder. If you've shut down or restarted the instances in your cluster, they'll have different public IPs. The cluster itself will pick up on the change, but the links in the email and the IP addresses the Live service recorded won't be useful anymore (and I suspect this is why I can't find a cluster with the IP 54.175.223.217). If you can SSH to your Manager Node, the Hue and CM password can be found in /var/tmp/cm_hue_admin_password.txt. Any other information you need can be found by entering that IP address into your browser.
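For example, a minimal session (the SSH user and key path are assumptions; use the key pair you selected at deployment time and your Manager Node's current public IP):

```bash
# Connect to the Manager Node (user name may differ on your AMI):
ssh -i ~/.ssh/my-live-key.pem ec2-user@54.175.223.217

# Read the generated admin password for Hue and Cloudera Manager:
cat /var/tmp/cm_hue_admin_password.txt
```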
03-01-2016
11:41 AM
That part of the tutorial has you use the Impala query editor - the query should work in Impala's particular flavor of the query language.
03-01-2016
11:40 AM
I'm not sure this question is referring to the QuickStart VM. There, CM is already installed; it's just not started by default, and you use the provided scripts to switch to a CM-based deployment. This question seems to be referring to the CM installer. I'm not surprised it would fail with 1 GB of RAM, but I would expect it to work with 6 GB of RAM (even though a production cluster should be on much larger machines). Have you looked at the /var/log/cloudera-manager-installer/6.start-embedded-db.log file, or is that where this is coming from? It looks like it's specifically the DB server that's failing, so other files in /var/log/cloudera-manager may be helpful. I don't recall specific names, but there's one that's clearly DB-specific, and I suspect that's where you'll find the real root cause.
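As a sketch of that triage (paths assume a default installer layout; the wildcard is there because I don't recall the exact directory name):

```bash
# The installer's log for the embedded-DB startup step:
sudo less /var/log/cloudera-manager-installer/6.start-embedded-db.log

# Look for the DB-specific log mentioned above:
sudo ls /var/log/cloudera-manager*
```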
02-15-2016
04:00 PM
So I can't tell you anything about your current deployment because, as I said, that access code appears to be incorrect, so I can't pull up the logs. I tried some Zoomdata deployments and did run into one issue with some operations timing out because they were trying to reach a Zoomdata package repository that no longer exists. I have fixed that problem, so it shouldn't affect any new deployments. It's possible that was affecting your previous attempts, but it doesn't show up in the logs I've found that appear to match your forum username. So if you do try another deployment, just be sure you have a new access code that you haven't used before, and don't pause the instances prematurely.
02-15-2016
06:55 AM
That doesn't appear to be a valid access code, and I can't find any new access codes with the same name as the ones I was looking at last week. You say you 'started up the instances' for today's session: again, restarting or stopping instances before the deployment has finished will cause it to fail, and there's no recovering from that. I started investigating the other failure that's now been seen twice during our Zoomdata deployment. I haven't yet gotten to the bottom of it, so I'll still need to find out why it's happening and get back to you. Given that your current master node's progress bar is stuck on that step, it's possible you're hitting that failure again.
02-12-2016
12:58 PM
In either CloudFormation or your list of instances, the instance Zoomdata is running on is specifically tagged. Zoomdata should be running on port 8080 on that instance. But again, Zoomdata and the data for that tutorial are among the last things to be set up. I can look up your clusters and see more specifically what's going on, but it's much easier if you can send me a private message with the access code you used for a given attempt.
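A quick way to check from your own machine, as a sketch (the IP below is a placeholder; use the public IP of the tagged instance):

```bash
ZOOMDATA_HOST=203.0.113.10   # replace with the tagged instance's public IP
curl -I "http://${ZOOMDATA_HOST}:8080/"   # any HTTP response means Zoomdata is up
```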
02-12-2016
07:34 AM
So if you restarted your instances before the deployment finished, the deployment will not have completed. You may be missing datasets or some configuration in your cluster. If you haven't already, read the "Stopping and Starting Instances" section of the documentation, as there are some other caveats you should be aware of: http://www.cloudera.com/get-started/cloudera-live/aws-documentation.html
02-11-2016
04:10 PM
Yes, you should get an email with credentials once the progress bar has completed. However, even without the email, you can find the credentials for the Hue and CM services on the Master Node (which you log into using the SSH key you selected at deployment time) in the file /var/tmp/cm_hue_admin_password.txt. Of course, usually the reason you don't get the final email is that there's a problem in the deployment, and in that case Hue and CM may not yet be set up properly. I believe I found the access codes for your 2 attempts and had a look at the logs. In one case, it looks like the machines were shut down mid-deployment outside of our control. It did look like the transfer of the sample datasets from S3 was taking longer than usual, so I'll follow up on that and see whether it can be improved or whether we need to adjust the time the progress bar expects it to take. The other attempt looks like an SSH error when setting up the Zoomdata sample datasets, which might also have been a shutdown of the instances outside our control, but I've seen the exact same error on another Zoomdata deployment recently, so it may be a bug that I'll investigate further. I'll post back here when I have a solution deployed or a workaround to recommend.
02-11-2016
12:50 PM
Yeah, you can register again with the same information and it should work. If it goes wrong again, as I said, just send me the access code and I can dig deeper.
02-11-2016
11:12 AM
I would double-check that your access code is correct and unique for each attempt (if it's invalid, there unfortunately isn't a good way to notify you). An access code is linked to specific VMs, so it's not reusable. Also, make sure you don't reboot the instances until the initial deployment is finished. When you reboot, AWS assigns new IPs; the cluster can deal with that once it's running, but as long as external control is needed (i.e. for the initial deployment), the IPs must stay consistent. Anything else that depends on the user doing something correctly should result in the stack being marked as a failure in CloudFormation, so if you're pretty sure the above issues don't apply, there may have been some other kind of failure we should take a look at. If you'd like to send me a direct message with the access codes for your attempts, I can retrieve the detailed logs for your specific case and investigate other possibilities.
01-31-2016
06:16 AM
1 Kudo
In your case it looks like you need to restart hadoop-hdfs-datanode.
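On a package-based install, that restart is just the following (service name as shipped with CDH packages of this era):

```bash
sudo service hadoop-hdfs-datanode restart
sudo service hadoop-hdfs-datanode status   # confirm it stayed up
```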
01-15-2016
07:15 AM
1 Kudo
In a nutshell, it looks like the cluster hit some delays trying to download Red Hat packages, and by the time it had gotten well past that, the machines were being torn down. Almost all other clusters have been deploying fine since my last post, so I'll reach out to you via private message to confirm some details and help you get a working cluster.
01-14-2016
08:23 PM
Yeah, you're *supposed* to get the email within about 10 minutes of the cluster being green within AWS. If it takes more than 15-20 minutes, I would usually just delete the cluster and contact us. In this case there were a number of abandoned deployments (i.e. machines getting deleted mid-deployment while the system waited to see if they would come back) hogging resources. Everything that was delayed should now be running, and you should receive an email momentarily if you didn't already. Thanks for letting us know about your delay - sorry for the inconvenience.
01-13-2016
09:53 AM
It looks like you're logged into 'cloudera3', which is one of the DataNodes / worker nodes and should not have MySQL on it. Your welcome email when the cluster is ready should link you to the guidance page which lists all the nodes in the cluster. The Manager Node is the one you should be logging into to work through the tutorial.
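A quick sanity check before running any MySQL commands, as a sketch (node naming follows the guidance page):

```bash
hostname                     # make sure you're on the Manager Node, not cloudera3
sudo service mysqld status   # MySQL should be running on the Manager Node
```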
01-13-2016
06:48 AM
1 Kudo
The links to the registration form are the three buttons, "Cloudera Enterprise", "Cloudera + Tableau", and "Cloudera + Zoomdata", at www.cloudera.com/live. I agree that isn't as clear as it used to be - I'll see if that can be updated soon. To answer your question, Cloudera Live AWS codes are single-use, but you can register for new access codes.
01-12-2016
07:08 AM
1 Kudo
Everything in CDH is set up and running without Cloudera Manager. You can complete the tutorial (although a few sections will be irrelevant and get hidden if Cloudera Manager isn't launched), and all the examples in Hue should work (Hue has an option built-in to install examples for Hive & Impala SQL queries, the HBase and Search apps, Pig scripts and more). You can also run Hadoop and Spark jobs, etc. Pretty much anything you would actually use to store and process data should work.
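As a quick smoke test, for example (the jar path assumes CDH's standard package layout on the VM):

```bash
# Run a bundled MapReduce example to confirm jobs work without Cloudera Manager:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 100
```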
01-10-2016
06:32 PM
2 Kudos
If you ran Sqoop with the '-m 1' argument, you should only expect to see one *.part file. If you're confused because the screenshot shows 3, that's because the tutorial is adapted to several different environments, and on clusters with multiple disks, multiple mappers should be used, so you end up with multiple partitions of the data in HDFS. If you're on the QuickStart VM, it's likely that nothing is wrong.
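To illustrate, a sketch using the tutorial's retail example (the connection details are from that example and may differ in your environment):

```bash
# One mapper => one partition of the table in HDFS:
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db \
    --username retail_dba --password cloudera \
    --table customers -m 1

# Expect a single part-m-00000 under the output directory:
hadoop fs -ls customers
```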
01-07-2016
02:04 PM
I'd need to take a look at a cluster and won't get a chance to for a little while, but that may not be a problem, so I would just proceed as-is. Displaying shard replicas in the interface might no longer happen - I personally found it cluttered the interface for something I never used, so I wouldn't be surprised if that was removed recently and the screenshot in the tutorial simply needs to be updated. Also, I believe it's step #1 that is not necessary, since it's already done when installing the sample datasets (the command in #1 creates a blank configuration, and we provide the already-edited configuration). I think it's step #2 that you were missing, where that configuration gets uploaded to ZooKeeper.
01-07-2016
11:40 AM
1 Kudo
That command depends on the previous two:

cd /opt/examples/flume
solrctl --zk quickstart:2181/solr instancedir --create live_logs ./solr_configs

Did you execute those? And if so, were there any error messages?
01-06-2016
09:12 AM
The message referring to keyboard / mouse capture simply means that moving focus between your host desktop and the virtual machine won't happen seamlessly. When you click in the VM, it locks your mouse cursor inside it, and you have to press a special key combination to move the cursor back out. Sometimes it's possible to configure the VM so that the mouse can move freely in and out. It shouldn't affect copy / paste. Can you be very specific about where you're copying from and pasting to? It's unclear whether you're talking about copying from the web browser, pasting into the terminal, or what. If you can narrow down exactly what part of that is failing, that would help. Another factor is whether you're trying to copy something between your host and the VM. That usually requires special setup to work, and the specifics depend on what platform you're using (VMware, VirtualBox, etc.)
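If it's the host-to-VM direction that's failing and you're on VirtualBox, one sketch (the VM name is an assumption; check `VBoxManage list vms` for yours):

```bash
# Enable bidirectional clipboard sharing between host and guest
# (requires Guest Additions installed inside the VM):
VBoxManage controlvm "cloudera-quickstart-vm" clipboard bidirectional
```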
01-06-2016
06:32 AM
The keyboard shortcuts for copy / paste in the terminal are Ctrl + Shift + C and Ctrl + Shift + V (respectively). Have you tried those? They're not supposed to be disabled...
01-05-2016
08:57 AM
1) I don't know what the issue is with connecting to MySQL. Definitely ensure that MySQL is running (sudo service mysqld status; sudo service mysqld restart). The configuration is in /etc/cloudera-scm-server/db.properties.

2) We do not have a recommended solution for running Hadoop on Docker. Director currently supports cloud platforms like AWS but not Docker. Docker is somewhat counter-productive for production Hadoop clusters: even though Hadoop is designed to get a bunch of machines to work together, having the cluster split into fewer, bigger pieces is better, and Docker essentially partitions a machine into smaller pieces. It can be handy for testing, etc. when performance doesn't matter, but it requires a lot of networking setup to make DNS, IP addresses, etc. work the way Cloudera Manager and Hadoop assume they do.
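For point 1, a minimal sketch of those checks (the grep just avoids echoing the password to your terminal):

```bash
# Confirm MySQL is up, restarting it if it isn't:
sudo service mysqld status || sudo service mysqld restart

# Inspect the connection settings without printing the password line:
sudo grep -v password /etc/cloudera-scm-server/db.properties
```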
01-05-2016
08:41 AM
2 Kudos
You're supposed to replace {{cluster_data.manager_node_hostname}} with the hostname of the MySQL server, which on Cloudera Live is the Manager Node. If you're using Cloudera Live, you should refer to the copy of the tutorial linked in your welcome email; it resolves all such variables for your specific cluster. The copy on the website is just for informational purposes - the procedure assumes a lot about the cluster, so you should really use the copy that's on your cluster.
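As a hypothetical example of the substitution (the hostname and the surrounding command are illustrative; use the exact line from your tutorial copy):

```bash
# If the website copy of the tutorial shows:
#   --connect jdbc:mysql://{{cluster_data.manager_node_hostname}}/retail_db
# and your Manager Node is ip-10-0-0-1.ec2.internal, run it as:
sqoop import-all-tables \
    --connect jdbc:mysql://ip-10-0-0-1.ec2.internal/retail_db \
    --username retail_dba --password cloudera \
    --warehouse-dir /user/hive/warehouse
```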
01-04-2016
06:34 AM
Your instances need to stay up at least until the deployment is finished. Since IP addresses are likely to change when you restart an instance, it's unpredictable what's going to happen if you stop the machines in the middle of the installation. You'll need to delete your stack and redeploy with a new access code. Details on safely stopping and starting the cluster after it is deployed can be found here under 'Stopping and Starting Instances': http://www.cloudera.com/content/www/en-us/get-started/cloudera-live/aws-documentation.html.
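If you have the AWS CLI configured, deleting the stack looks roughly like this (the stack name is whatever you chose at deployment; the CloudFormation console works just as well):

```bash
aws cloudformation delete-stack --stack-name my-cloudera-live-stack
```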
12-22-2015
08:21 AM
5 Kudos
Kevin, 4096 MB should be sufficient unless you want to launch Cloudera Manager. All of the CDH services should be able to start up, and simple examples should work pretty reliably. As for your issue, I would suggest going to File -> Import Appliance... and opening the .ovf file in the directory you unzipped. That file sets up the virtual hardware for you (instead of you setting up everything but the hard disk), so it eliminates a lot of variables. I'm not sure what the root cause of the issue is, but that's a good first step.
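If you prefer the command line, the equivalent import is below (the filename is an assumption based on how the VM typically ships; adjust to what's in your unzipped directory):

```bash
# Import the appliance so VirtualBox configures the virtual hardware itself:
VBoxManage import cloudera-quickstart-vm-*-virtualbox.ovf
```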