Member since: 01-20-2014
Posts: 578
Kudos Received: 102
Solutions: 94

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5724 | 10-28-2015 10:28 PM
 | 2725 | 10-10-2015 08:30 PM
 | 4749 | 10-10-2015 08:02 PM
 | 3543 | 10-07-2015 02:38 PM
 | 2343 | 10-06-2015 01:24 AM
09-26-2014
01:48 AM
Please provide the exact error messages so we can assist you. You can paste the log file to pastebin and provide the URL here if you wish.
09-25-2014
09:00 AM
I don't have a QQ account. Please use a service like pastebin that doesn't need us to sign in.
09-25-2014
04:06 AM
postgresql84-server is in the updates repo. The repo http://vault.centos.org/5.7/updates/x86_64/RPMS/ contains postgresql84-server-8.4.9-1.el5_7.1.x86_64.rpm

This will add the repo:

# cat > /etc/yum.repos.d/centos57-updates.repo <<EOF
[updates]
name=Updates
baseurl=http://vault.centos.org/5.7/updates/\$basearch/
gpgcheck=0
EOF

Then install postgresql84-server:

# yum makecache
# yum install postgresql84-server
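A quick sanity check after the install (illustrative commands only; package and service names on your box may differ):

# rpm -q postgresql84-server
# chkconfig --list | grep -i postgres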
09-22-2014
07:54 PM
"HDFS Under replicated blocks" implies that some blocks are not duplicated enough to satisfy the default replication factor of 3. If possible consider setting up clusters with at least 3 nodes. "Missing Blocks" implies the datanodes which had block before shutdown now don't have it when they booted up. This could happen with the Instance Store. What kind of storage did you use on the nodes? This is explained here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html When you run "hadoop fsck -delete" you are telling the namenode to delete files whose blocks cannot be located. This is fine for temporary files. Before running it however you should run "hdfs fsck -list-corruptfileblocks", identify the reason why the blocks are missing. If the blocks are recoverable, you won't have to delete the files themselves. "could only be replicated to 0 nodes, instead of 1" could mean the datanodes are not healthy. Check the datanode logs under /var/log/hadoop-hdfs on both nodes to see what the problem might be. If it's not clear, paste the relevant parts to pastebin and give us the URL
09-21-2014
10:45 PM
If you check the script /usr/share/cmf/bin/cmf-server on the host running Cloudera Manager, you will see some checks for existing JDK locations. One of them is /usr/java/jdk1.7*, which matches your ls listing. Are you able to run the "bin/java" program from that location and see if it prints the right version?

# /usr/java/jdk1.7*/bin/java -version
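If the wildcard matches more than one directory, it may help to see what it expands to first (illustrative commands; /usr/java/latest only exists if the Oracle JDK RPM created that symlink):

# ls -d /usr/java/jdk1.7*
# ls -l /usr/java/latest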
09-21-2014
06:32 PM
To verify whether Oracle Java was installed correctly, could you provide the output of:

# rpm -qa | grep oracle
# ls -l /usr/java
09-20-2014
08:18 PM
This is because the root user is not valid within HDFS. Try running the command prefixed with "sudo -u hdfs", which runs it as the hdfs user.
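For example (illustrative commands; replace the path and username with the ones from your own setup):

# sudo -u hdfs hadoop fs -mkdir /user/someuser
# sudo -u hdfs hadoop fs -chown someuser /user/someuser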
09-20-2014
08:59 AM
Is /pkg/moip/mo10755/work/mzpl/cloudera/parcels, or any of the directories in between, on a remote host or a separate mount? Do any of the mounts have a noexec option? Run the mount command and check the flags that are set.

Check the #! line in the script: what shell does it refer to, and is that program present on your box? What user are you running the test as? Permissions of 766 imply that "other" cannot execute the script.

To verify that the cloudera-scm user is valid, run the /bin/id command on it (as root):

# /bin/id cloudera-scm

Do the uid and gid match those in /etc/passwd?
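A couple of illustrative checks (using the cdh_env.sh path from this thread; adjust if yours differs):

# mount | grep -i noexec
# head -1 /pkg/moip/mo10755/work/mzpl/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/meta/cdh_env.sh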
09-20-2014
08:44 AM
Did you try executing the script standalone? Does it work?

/pkg/moip/mo10755/work/mzpl/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/meta/cdh_env.sh
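For example, to run it standalone and check the exit status (a sketch; run it as the same user the agent would use):

# sh /pkg/moip/mo10755/work/mzpl/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/meta/cdh_env.sh
# echo $?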
09-20-2014
08:29 AM
All those parcel directories should be manageable by Cloudera Manager and should be owned by cloudera-scm:cloudera-scm.
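For example (an illustrative command based on the parcel directory from this thread; double-check the path before running it):

# chown -R cloudera-scm:cloudera-scm /pkg/moip/mo10755/work/mzpl/cloudera/parcels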