Member since: 07-12-2013
Posts: 435
Kudos Received: 117
Solutions: 82
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1950 | 11-02-2016 11:02 AM
 | 3008 | 10-05-2016 01:58 PM
 | 7630 | 09-07-2016 08:32 AM
 | 8051 | 09-07-2016 08:27 AM
 | 2000 | 08-23-2016 08:35 AM
10-10-2014
03:34 PM
This is an archive of the official Cloudera Manager mailing list. I believe this is still the recommended way to reset your password: http://grokbase.com/t/cloudera/scm-users/126ekek7kz/how-to-change-admin-password-in-cloudera-manager

Did you verify that there wasn't extra whitespace included in what you were pasting, though? I found that was very easy to do by accident... Also, I misspoke: you should receive 2 emails if you're using the 1-click deployment as part of signing up for GoGrid. Since you spun your cluster up from inside the GoGrid portal, it is correct that you only received the second email - the one with credentials.
10-10-2014
02:44 PM
>> I am not sure the mechanics of deploying the server

You will receive an email when your virtual hardware has been provisioned by GoGrid, and you will receive a second email when all of the Cloudera software, example data, etc. has been installed on those machines. All told, this usually takes about 45 minutes, but has been known to take up to 2 hours when things are really busy.

>> Not able to login to Cloudera Manager via browser. Password received via email is rejected. May be I need a password reset. What is the recommended way to reset?

I found a little glitch: if you're copying and pasting directly from the email, sometimes you copy some of the whitespace before the password as well, so you're entering a tab plus the password. Can you confirm this is not the case by pasting into a text editor (Notepad or something) first, just to verify what's in the clipboard and that there isn't extra whitespace being entered? If that's not the case, I believe there are ways to reset the password (you can SSH as root into the CM server using the credentials in your GoGrid portal), but I will need to look around for the officially recommended method of doing so. There are discussions on other forums on this topic, but some of them are quite possibly outdated.
10-06-2014
02:49 PM
Kevin,

The Hadoop ecosystem is a lot more complex than just a simple key-value store, but a key-value store is sufficient to answer your question. Let's say you have data of the form "Key => Value1" in one location, and "Key => Value2" in another location. If you know one value, it's not trivial to find all related values. Unless... you have an inverted index that allows you to look up the key for any given value, and then use that key to look up the other values.

For instance, say I have a database that lists the mailing address for each person, e.g. Kevin => 1 Apple St, Sean => 2 Zebra Ln. This is great if you just want to see where specific people live, but what if your question starts with having an address and needing to know all the people who live there? Instead of the key being the name and the value being the address, you create a different index that inverts this, e.g. 2 Zebra Ln => Sean, 1 Apple St => Kevin. Now it's easy to see everyone who shares an address, because they would also share a key (which is actually not doable in some key-value stores - in that case you would modify the value field to encode a sequence of values).

I know it's been a while since you asked your question, but I hope this helps!
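To make the idea concrete, here is a minimal Python sketch of that inverted index, using the names and addresses from the example above (the third person, "Alice", is invented here to show two people sharing an address):

```python
# Hypothetical sketch of the inverted index described above: a
# name -> address store, plus a second index keyed by address.
addresses = {
    "Kevin": "1 Apple St",
    "Sean": "2 Zebra Ln",
    "Alice": "2 Zebra Ln",  # invented third record: shares Sean's address
}

# Invert the mapping. Since several keys can map to the same value,
# each inverted entry holds a sequence of names, as the post suggests.
inverted = {}
for name, addr in addresses.items():
    inverted.setdefault(addr, []).append(name)

print(inverted["2 Zebra Ln"])  # ['Sean', 'Alice']
```

With both indexes in hand, a value-to-values lookup is just two cheap key lookups instead of a full scan.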
10-06-2014
02:27 PM
1 Kudo
Ejaz,

The qcow2 file is just a disk image. To boot it up with KVM you'll also need to define the amount of RAM, CPU resources, etc. The reason for distributing just the disk image is that it's usable in KVM, but it's also commonly used in all sorts of systems that have their own ways of defining the rest of the hardware. Since you're on Ubuntu, I'd refer you to these instructions for creating virtual machines from existing disk images: https://help.ubuntu.com/community/KVM/CreateGuests#Create_a_virtual_machine_from_pre-existing_image. Since you're running Ubuntu desktop, I'd also suggest you look at virt-manager (also discussed on that page), which provides a graphical interface for doing all of this. It's what I use; you'll find it much easier to work with, yet still very powerful.
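For the command-line route, a one-shot `virt-install` invocation along these lines can boot an existing qcow2 image; this is a sketch only, and the VM name, image path, and resource sizes are placeholders you'd adjust for your own download:

```shell
# Hypothetical sketch: boot a pre-existing qcow2 disk image with KVM,
# supplying the RAM/CPU definition the image itself does not carry.
# Name, path, and sizes are placeholders.
virt-install \
  --name my-vm \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/my-image.qcow2,format=qcow2 \
  --import \
  --network network=default \
  --graphics vnc
```

The `--import` flag tells virt-install to skip OS installation and boot straight from the existing disk, which is exactly the scenario here.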
06-19-2014
06:59 PM
1 Kudo
The app is disabled by default because it requires some features that are not quite ready to be completely supported across CDH and CM just yet, but look for them in a future release. You can find the details about getting it working here: http://gethue.com/a-new-spark-web-ui-spark-app/. Specifically, you can enable the app by adding the following to the "Hue Service Advanced Configuration Snippet" (the safety valve in the CM configuration for Hue):

[desktop]
app_blacklist=

You will also need to download, configure, and start the Spark Job Server, which is not an official component of Spark at the moment - you can find all those details on that page as well. Good luck!
11-07-2013
09:30 AM
Let us know how that goes - I hope it works for you. Note that this is not a "package upgrade" in the strictest sense. libstdc++43 and libstdc++45 can both be installed at the same time (they both are on my machine) - so you won't be upgrading libstdc++43, you'll just be installing libstdc++45 as well.
11-05-2013
03:18 PM
1 Kudo
Could you also post the output of `zypper search libstdc++`? I'm able to run Impala on SLES SP1, and I have libstdc++45. It's possible you have an older version installed and Impala is requiring the latest one.
08-21-2013
10:06 AM
Some tools do not extract files larger than 2 GB correctly. If you have not already, try extracting the archive using 7-Zip.
08-01-2013
02:04 PM
It sounds like you may have a typo in one of the file paths. If you see something similar to "bash: ./myreducer.py: No such file or directory", your typo is in the path or filename of the reducer script. But if you see "bad interpreter" in the error, it means the path you're using to point to python is incorrect. If you have a hard time finding the typo, try copy/pasting the output of "ls -l", your exact command, the exact output of that command, and possibly your scripts as well. In a Linux terminal window, Ctrl + Shift + C and Ctrl + Shift + V can be used to copy and paste.
08-01-2013
10:07 AM
jp,

Try inserting the header "#!/usr/bin/env python" as the first line in your scripts. This signals to the operating system that your scripts should be executed through Python. If you do this in your local example (and run "chmod +x *.py"), it works without having to prefix each script with python:

cat inputfile.txt | ./mymapper.py | sort | ./myreducer.py

Copy the modified files back into HDFS and MapReduce will now be able to execute your mappers and reducers.
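As an illustration of the pattern, here is a hypothetical word-count mapper/reducer pair in the Hadoop Streaming style; it is not jp's actual code, and in a real job each function would consume sys.stdin line by line, but taking any iterable of lines keeps the sketch easy to run locally:

```python
#!/usr/bin/env python
# Hypothetical word-count mapper and reducer illustrating the
# "#!/usr/bin/env python" header. Each function accepts an iterable
# of text lines, mirroring how Streaming feeds scripts via stdin.
def mapper(lines):
    # Emit "word<TAB>1" for every word, as Streaming mappers do.
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word

def reducer(pairs):
    # Streaming delivers mapper output sorted by key; a dict keeps
    # this local sketch correct even on unsorted input.
    counts = {}
    for pair in pairs:
        word, n = pair.split("\t")
        counts[word] = counts.get(word, 0) + int(n)
    for word in sorted(counts):
        yield "%s\t%d" % (word, counts[word])

print(list(reducer(sorted(mapper(["the quick fox", "the fox"])))))
```

The same two functions, wrapped in small stdin-reading scripts and marked executable with the shebang header, are exactly what the local `cat ... | sort | ...` pipeline above exercises.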