Member since: 01-20-2014
Posts: 578
Kudos Received: 102
Solutions: 94
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6698 | 10-28-2015 10:28 PM |
| | 3576 | 10-10-2015 08:30 PM |
| | 5649 | 10-10-2015 08:02 PM |
| | 4107 | 10-07-2015 02:38 PM |
| | 2891 | 10-06-2015 01:24 AM |
03-01-2015
09:59 PM
Visit any service's configuration page. In the search box, type "logging". This will show you all the locations where log files are written for this service. Make the changes you want and restart the affected roles. Ensure that every relevant host in the cluster has the new directory created and that the filesystem permissions are correct.
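If it helps, a minimal sketch for preparing the new directory on each host; the path /data/cloudera-logs/hdfs and the hdfs:hadoop owner are only placeholders for whatever you configure:

```bash
# Run on every host that carries a role of the service (placeholder path/owner)
NEW_LOG_DIR=/data/cloudera-logs/hdfs   # hypothetical: use the directory you set in CM
mkdir -p "$NEW_LOG_DIR"
chown hdfs:hadoop "$NEW_LOG_DIR"       # match the user the role runs as
chmod 0755 "$NEW_LOG_DIR"
```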
02-26-2015
12:46 PM
Hi Greg, are you able to turn on API debugging and see whether the deletion of the repos actually worked? After that, restart CM. If you still see archive.cloudera.com being loaded, please post the log here, maybe using pastebin.
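If you want to double-check over the REST API, something like the sketch below should show whether the repo URLs are really gone; it assumes CM 5.x's /api/v10/cm/config endpoint, a cm-host placeholder, and default admin credentials:

```bash
# Hypothetical check: list CM-level config and look for leftover parcel repo URLs
curl -s -u admin:admin \
  'http://cm-host:7180/api/v10/cm/config?view=full' \
  | grep -i -A2 'remote_parcel_repo_urls'
```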
02-24-2015
04:23 PM
1 Kudo
Ah, my bad. Darren caught it. Cloudera Manager will write zoo.cfg to /var/run/cloudera-scm-agent/process/ (look for the latest directory for ZooKeeper).
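A quick way to find it as root (sketch; the glob assumes the default role naming under the process directory):

```bash
# Newest ZooKeeper process directory, then the zoo.cfg CM generated inside it
ls -dt /var/run/cloudera-scm-agent/process/*zookeeper* | head -1
cat "$(ls -dt /var/run/cloudera-scm-agent/process/*zookeeper* | head -1)/zoo.cfg"
```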
02-24-2015
04:22 PM
Did you run a rolling restart of the ZooKeeper service? I am also not sure whether Cloudera Manager copies the data from the old location to the new one; that is something you might have to take care of manually as well.
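If you do end up moving the data yourself, a rough sketch (the old and new paths are placeholders; stop the role first and keep the original directory until you have verified the move):

```bash
# Hypothetical paths; run on each ZooKeeper host with the role stopped
OLD_DIR=/var/lib/zookeeper   # previous dataDir
NEW_DIR=/data/zookeeper      # new dataDir configured in CM
mkdir -p "$NEW_DIR"
rsync -a "$OLD_DIR"/ "$NEW_DIR"/
chown -R zookeeper:zookeeper "$NEW_DIR"
```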
02-23-2015
09:29 PM
1 Kudo
You could install CDH on Red Hat 6.6 and it might just work; however, there might be a few problems here and there. If you logged a support case, we'd be hesitant to provide advice, as we don't yet test against that release.
02-23-2015
05:54 PM
When a NameNode is to be made active, the ZKFC writes to the znode under /hadoop-ha. If this cannot be done for whatever reason, the NN cannot become active. This is likely an issue between ZooKeeper and the ZKFC. Check the ZKFC logs (on the same host as the NN); they should provide a clue. If there's nothing there, check whether the ZooKeeper hosts are responding correctly (is the service up?).
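A couple of quick checks from the NN host (sketch; replace zk1 with one of your ZooKeeper servers):

```bash
# Is the ZooKeeper server answering?
echo ruok | nc zk1 2181                           # should print "imok"

# Does the HA parent znode exist and hold the expected entries?
zookeeper-client -server zk1:2181 ls /hadoop-ha
```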
02-22-2015
02:32 AM
Glad to know it is resolved now. You can try the same procedure on all your cluster nodes where /usr/bin/hadoop is not symlinked correctly.
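One way to spot the remaining hosts (sketch; host1 host2 host3 is a placeholder list):

```bash
# Hypothetical host list; prints each host plus what /usr/bin/hadoop points to
for h in host1 host2 host3; do
  echo "== $h"
  ssh "$h" 'ls -l /usr/bin/hadoop 2>&1'
done
```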
02-21-2015
10:13 PM
3 Kudos
That does tally with what I see. Are you able to:
- verify that the file "/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop" actually exists
- restart the Cloudera Manager agent: # service cloudera-scm-agent restart
- check if the /usr/bin/hadoop symlink has been created
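For reference, the same three checks as commands (the parcel path is the one from above):

```bash
ls -l /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop   # does the parcel binary exist?
service cloudera-scm-agent restart                                  # agent re-runs its alternatives setup
ls -l /usr/bin/hadoop                                               # was the symlink (re)created?
```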
02-21-2015
01:06 PM
Can you paste the contents of /var/lib/alternatives/HADOOP here?
02-20-2015
09:12 PM
1 Kudo
Check what /etc/alternatives/hadoop points to; most likely it points to an unavailable parcel. The simplest way to resolve this is:
- check all files under /var/lib/alternatives for references to invalid parcels
- delete those references, ensuring the reference to 5.3.1 is present and is the first option
- restart the Cloudera Manager agent; this will set up alternatives again
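Roughly, the steps look like this (sketch; "CDH-5.3.0" below is only an example of a stale parcel version to search for):

```bash
readlink -f /etc/alternatives/hadoop        # where does the alternative point today?
grep -l 'CDH-5.3.0' /var/lib/alternatives/* # hypothetical stale version; files still referencing it
# edit out the stale entries, keeping the 5.3.1 reference as the first option, then:
service cloudera-scm-agent restart          # the agent rebuilds the alternatives
```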