Member since: 07-29-2013
Posts: 62
Kudos Received: 19
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 985 | 10-28-2014 10:22 AM
 | 3280 | 09-19-2014 10:26 AM
 | 2874 | 06-11-2014 01:47 PM
 | 6077 | 06-11-2014 05:46 AM
 | 2626 | 11-04-2013 06:58 AM
01-21-2015
09:25 AM
The error "error parsing conf core-default.xml" suggests that your Hadoop client configuration is malformed. Can you go through your Hadoop configuration files and ensure they are all well-formed XML, please?
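A quick way to check this, as a sketch (the config path shown is the usual client default; adjust for your layout), is to run every file in the client config directory through an XML parser:

```shell
#!/usr/bin/env bash
# Report whether each *.xml file under a Hadoop config dir is well-formed.
# Uses python3's stdlib parser so nothing extra needs to be installed.
check_xml_dir() {
  local dir="${1:-/etc/hadoop/conf}" f status=0
  for f in "$dir"/*.xml; do
    if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$f" 2>/dev/null; then
      echo "OK: $f"
    else
      echo "MALFORMED: $f"
      status=1
    fi
  done
  return $status
}
# Usage: check_xml_dir /etc/hadoop/conf
```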
10-28-2014
10:22 AM
Hi Isra, You only need the el6 parcel on your CM server box. Cloudera Manager will download the parcels for you, however, so I am not sure what's happening. Maybe you will find something in the SCM server logs under /var/log/?
09-22-2014
10:27 AM
1 Kudo
Sorry, I missed that. Let me poke around a bit.
09-22-2014
10:11 AM
1 Kudo
That means you need to have CDH installed on your cluster, and that version of CDH should be at least CDH 5.1.0. Can you please verify that's the case?
09-19-2014
10:26 AM
1 Kudo
I am stretching the limits of my knowledge of CSD guts, but I am pretty sure CSDs can't do that. They are only for managing services. Any code that you want deployed will have to be in your parcel. It's fairly straightforward to create a parcel: it's a tarball with some additional metadata in it. Here are a few good starting points:
* http://blog.cloudera.com/blog/2013/05/faq-understanding-the-parcel-binary-distribution-format/
* https://github.com/cloudera/cm_ext
* https://github.com/cloudera/cm_ext/wiki/Building-a-parcel
* https://github.com/cloudera/cm_ext/wiki/The%20parcel%20format
* (Optional) https://github.com/cloudera/cm_ext/wiki/The-parcel-repository-format
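As a rough sketch of how small a parcel can be (the service name and the parcel.json fields shown are illustrative; see the cm_ext wiki links for the full required schema):

```shell
#!/usr/bin/env bash
# A parcel is a gzipped tarball whose top-level directory is NAME-version
# and which carries its metadata in meta/parcel.json.
mkdir -p MYSERVICE-1.0/meta MYSERVICE-1.0/lib
cat > MYSERVICE-1.0/meta/parcel.json <<'EOF'
{
  "schema_version": 1,
  "name": "MYSERVICE",
  "version": "1.0",
  "provides": ["myservice"],
  "components": [{ "name": "myservice", "version": "1.0", "pkg_version": "1.0" }]
}
EOF
# The distro suffix (el6 here) must match the target OS.
tar czf MYSERVICE-1.0-el6.parcel MYSERVICE-1.0/
```

Any binaries you want deployed would go under the versioned directory (lib/ above) before building the tarball.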
09-19-2014
09:58 AM
Hi Ramana, What UI code are you referring to? Assuming you are referring to the Spark Master UI (and not the Cloudera Manager UI), that code is a part of Spark binaries that are delivered as a part of the CDH parcel.
09-19-2014
08:42 AM
Hi Ramana, The Spark Master Web UI is from the Spark project; the CSD doesn't have the code for it. If you want a similar UI, you are probably better off looking at how, say, the Hadoop NameNode UI or the Spark Master UI is written.
09-02-2014
04:40 PM
Hi mshirley, No, don't reinstall your OS; this should be a fixable problem. Parcel symlinks are usually created when you activate or deactivate a parcel, and their default alternatives priority is intentionally kept low. It's possible that at some point an alternative got created at a higher priority (manually by someone, or by a bug), or that deactivation of a previous parcel was not done properly, leaving a lingering alternative. If I were you, I would:
1. Deactivate all CDH parcels.
2. Look at the alternatives and check whether a symlink /opt/cloudera/parcels/CDH -> /opt/cloudera/parcels/ exists. With everything deactivated it should NOT exist; if it does, go ahead and remove that alternative.
3. Ensure no such alternatives (and hence no such symlinks) remain.
4. Once you have done that, activate the parcel that you want via the CM UI.
If you run into a similar issue again, please do provide the steps you took to reproduce the situation. Sorry about the inconvenience. Mark
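To make the symlink check concrete, a minimal sketch (the path is the CM default parcel root; check_parcel_link is a hypothetical helper):

```shell
#!/usr/bin/env bash
# With all CDH parcels deactivated, no CDH symlink should remain under the
# parcel root; if one does, a stale alternative is still pointing at it.
check_parcel_link() {
  local root="${1:-/opt/cloudera/parcels}"
  if [ -L "$root/CDH" ]; then
    echo "leftover link: CDH -> $(readlink "$root/CDH")"
    return 1
  fi
  echo "no CDH link"
}
# Usage: check_parcel_link
# Then inspect the alternatives database itself, e.g.:
#   alternatives --display hadoop-conf
```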
07-16-2014
09:21 AM
When using CM, you should manage your services via CM as well. The client configuration can be found under /etc/hadoop/conf, but the configurations used by the various services can differ and are visible via the CM web interface.
07-02-2014
07:52 AM
When using Cloudera Manager, you should start and stop services via the CM API (by adding a service for your component, etc.) not via the Linux service commands.
06-20-2014
10:47 AM
Balakumar, I can't say much except the things you likely already know (sorry!):
* Make sure there are no cm4 repos.
* Run yum clean all or similar.
* If using the CM installer, ensure you are using the correct version.
06-19-2014
10:55 AM
Hi Bogolese, I am definitely not an authority when it comes to this, but perhaps you need to re-deploy the MR1 client configuration. Have you tried that? Click on the MR1 service, then go to Actions -> Deploy Client Configuration at the top right of the screen.
06-18-2014
01:30 PM
Exactly. All I was trying to say is that removing the /var/lib files will format your namenode. But many different components (from CDH and Cloudera Manager) put files in /var/lib, so formatting the namenode is not necessarily going to help (and it will delete all your HDFS data). In fact, looking at the sizes of the contents of that directory, it's likely not going to help.
06-17-2014
01:36 PM
/var/lib is usually used to store the state of the system. So, for example, if you have a namenode running on a machine, the metadata for the namenode is written in that directory. Formatting the namenode will clean out a subdirectory of /var/lib, so in general it's not a good idea to delete those files. You should look a little more deeply into what's making that directory fill up. If the files look like logs, it's probably OK to delete them, but most of that directory contains things you don't want to delete from a functioning cluster.
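A sketch for the "look more deeply" step (plain du, nothing cluster-specific; biggest is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Print the ten largest entries directly under a directory, biggest first,
# so you can see what is actually filling /var/lib before deleting anything.
biggest() {
  du -sk "${1:-/var/lib}"/* 2>/dev/null | sort -rn | head -10
}
# Usage (sizes are in KB): biggest /var/lib
```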
06-16-2014
10:33 AM
Sounds like a problem with the CentOS mirror. Are you able to reach the URL directly from the machine (say, via ping/wget/etc.)? If so, perhaps try again?
06-11-2014
01:47 PM
Sure thing. You mentioned "CM 4.6 was installed, but I installed it before I started this process." I am assuming you meant uninstalled it? I am not sure how kosher it is to have two major CM versions running on the same box; probably not kosher at all. If you did uninstall, the uninstall may not have been clean. Can you cd into /etc/yum.repos.d and make sure there are no repo files there from the cm4/cdh4 days? If there are, move them out of there, run yum clean all, and try the installer again. I personally tried downloading the installer from Cloudera Express, and it sure looks like the CM5 installer, doing the right thing and installing CM5.
06-11-2014
01:06 PM
Hi Bogolese, Sorry to hear that! Something seems off: the version of the packages it's trying to install, 4.6.0-1.cm460.p0.140, is still from the CM4/CDH4 era. Are you upgrading an existing cluster, or starting from scratch? Especially if the latter, may I recommend a few debugging steps:
1. Make sure there are no old lingering packages from the past: rpm -qa | grep cloudera will show you all packages with "cloudera" in the name, which should be good enough for now.
2. Run yum clean all to make sure all the repo information is up to date.
3. You will need the Cloudera Manager repo for CDH5 under /etc/yum.repos.d, but how that ends up there depends on the exact method of installation, so let's not worry about that for now; just an FYI.
Let me know how it goes. BTW, you shouldn't need to install JDK7 yourself, so something does seem off.
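The repo-file side of this can be sketched as follows (find_old_repos is a hypothetical helper; the pattern just looks for CM4/CDH4-era repo files):

```shell
#!/usr/bin/env bash
# List yum repo files that still reference cm4/cdh4; these would keep
# pulling in 4.x packages even after a CM5 install is attempted.
find_old_repos() {
  local dir="${1:-/etc/yum.repos.d}"
  grep -liE 'cdh4|cm4' "$dir"/*.repo 2>/dev/null
}
# Usage:
#   find_old_repos || echo "no old repo files found"
#   rpm -qa | grep -i cloudera   # any lingering packages?
#   yum clean all                # refresh repo metadata
```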
06-11-2014
05:46 AM
2 Kudos
Hi Kevin, Doing yum remove hadoop hue-common 'bigtop-*' sqoop2-client should remove all CDH packages.
05-22-2014
09:26 AM
1 Kudo
Hi Annamalai, You can install Cloudera Manager for free as part of Cloudera Express, which has very nice charting and monitoring capabilities in addition to easier deployment, upgrades, and service and security management.
05-22-2014
08:53 AM
1 Kudo
Hi ATP, All of Cloudera's distribution, CDH, is open source, licensed under the Apache Software License, v2. In terms of support offerings, here is a matrix that may be helpful: http://www.cloudera.com/content/cloudera/en/products-and-services/product-comparison.html Please let us know if you have any further questions.
03-03-2014
02:46 PM
Hi Steve, By default, the first user that logs into Hue becomes the first admin user, so feel free to pick whoever you like. Here are some more details: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/Hue-2-User-Guide/hue29.html
02-06-2014
07:04 AM
1 Kudo
Sorry to hear about the removal of other packages, but Clint is exactly right. To pile on: ZooKeeper is a fairly fundamental component of the stack. For example, the hadoop package depends on zookeeper, so removing zookeeper will remove hadoop and all other packages that transitively depend on hadoop. Like Clint said, the state is always stored on the system, so if you were to reinstall hadoop (without reformatting HDFS), you should be able to continue from where you left off.
11-04-2013
06:58 AM
1 Kudo
Hi qwert, Thanks for posting. The docs you are referring to are Apache Hive docs, and the release number they mention is from Apache Hive as well. That is to say, the first release of Apache Hive that contains HiveServer2 is Apache Hive 0.11.

In your case, the version of Hive you see is CDH 4.3.0's 0.10.0+134. The version 0.10.0+134 means that CDH 4.3 contains Apache Hive 0.10 with 134 patches applied on top of it. Since CDH, Cloudera's Distribution Including Apache Hadoop, is 100% open source, each and every one of those patches is from Apache Hive: changes that were not in Apache Hive 0.10 but were committed upstream later, which Cloudera decided to include, integrate, and test as part of its distribution. One of those 134 patches corresponds to HiveServer2, and hence HiveServer2 is part of and is supported in CDH 4.3.0. In fact, the first Cloudera release that included HiveServer2 was CDH 4.1.0.

If you have any particular issues with HiveServer2, please file a separate issue; we will be happy to work with you on that. Moreover, if you plan on using other features in Apache Hive 0.11, you can use CDH 5 beta 1, which contains Hive 0.11.0 with 483 patches applied over it. http://blog.cloudera.com/blog/2013/10/cloudera-enterprise-5-beta-is-now-available-for-download/
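The CDH version scheme described above can be pulled apart mechanically; a small sketch:

```shell
#!/usr/bin/env bash
# A CDH component version like "0.10.0+134" is <upstream base>+<patch count>.
v="0.10.0+134"
base="${v%%+*}"      # the upstream Apache Hive release
patches="${v##*+}"   # the number of upstream patches applied on top
echo "Apache Hive $base with $patches patches applied"
```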
09-18-2013
03:58 PM
Thanks, will do!
09-18-2013
01:29 PM
Hi! Thanks for posting. The Mahout docs are bundled in a separate package (to reduce the cruft for people who don't care about downloading docs). You can install the mahout-doc package using 'sudo apt-get install mahout-doc'. The contents should go under /usr/share/doc/mahout* and you should be able to pick them up from there. Please let me know if you have any further questions. And good luck!
08-22-2013
10:35 AM
Sergey, From the looks of it, the hive-metastore service is not running. I am assuming you are running a remote metastore; can you make sure it's up and running? Perhaps telnet to the machine it's meant to be running on, on port 9083, to see whether the service is up and listening and the port is open.
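A sketch of that connectivity check using bash's built-in /dev/tcp instead of telnet (the hostname below is a placeholder; 9083 is the default metastore thrift port):

```shell
#!/usr/bin/env bash
# Succeeds only if something is accepting TCP connections at host:port.
check_port() {
  local host="$1" port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
# Usage: check_port metastore-host.example.com 9083
# Note: /dev/tcp is a bash feature; an unroutable host may take a while
# to time out, whereas a refused local connection fails immediately.
```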
08-21-2013
11:35 AM
1 Kudo
While it's possible to install HCatalog and related packages via yum while using the rest of the CDH stack from the parcel, I would recommend not mixing and matching packages and parcels. HCatalog doesn't come as a separate parcel; it's present in the CDH parcel. As far as WebHCat goes, it's a "role" of the Hive service, so you should be able to go to the Hive service and add the WebHCat role. Please let me know if you can't find it (or if it's hard to find). Feedback welcome!
08-20-2013
10:00 AM
The good thing is that you created an external table, so you can just delete the table and recreate it. The underlying data in HDFS (/user/test/...) wouldn't be deleted.