Member since: 06-26-2013
Posts: 416
Kudos Received: 104
Solutions: 49

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7133 | 03-23-2016 08:06 AM
 | 12438 | 10-12-2015 01:56 PM
 | 4277 | 03-05-2015 11:11 AM
 | 5800 | 02-19-2015 02:41 PM
 | 11565 | 01-26-2015 09:55 AM
09-06-2016
06:57 AM
Clint, Is there any guidance *against* using logrotate for a cluster, if it is installed? Thanks, Chris
08-15-2016
09:46 PM
SOLVED I have figured it out. I was re-installing on my lab Red Hat Linux 7.2 server via:
1. $ wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
2. $ chmod u+x cloudera-manager-installer.bin
3. $ sudo ./cloudera-manager-installer.bin
While doing this I was logged into Google via Firefox, and a window popped up asking for a keyring password, so I used my Google password again to add it to the keyring. I did this on the second installation only; I had not done it the first time, when everything was working on my initial download before I blew everything away and re-installed. When I got to the Oracle license agreement, selected yes, and the installation continued, that is when I got the error:
sh: line 1: 5514 Segmentation fault (core dumped) DEBIAN_FRONTEND=noninteractive yum -y install oracle-j2sdk1.7 > /var/log/cloudera-manager-installer/2.install-oracle-j2sdk1.7.log 2>&1
The solution: I deleted my Google account and closed out of Google Mail in Firefox, then restarted the installation by following the directions on the Cloudera website above, and the installation completed with no errors.
08-03-2016
06:58 AM
You can stop the services with the following. (Another option is just to use restart; I like to stop, verify all processes are gone/down, then start when needed.)

Cloudera Manager Server (stop the agent, stop the server, then stop the database if using the local PostgreSQL default):
sudo /sbin/service cloudera-scm-agent stop
sudo /sbin/service cloudera-scm-server stop
sudo /sbin/service cloudera-scm-server-db stop

Do the reverse to start:
sudo /sbin/service cloudera-scm-server-db start
sudo /sbin/service cloudera-scm-server start
sudo /sbin/service cloudera-scm-agent start

On the other nodes in the cluster, just stop/start the agent:
sudo /sbin/service cloudera-scm-agent stop
sudo /sbin/service cloudera-scm-agent start
or
sudo /sbin/service cloudera-scm-agent restart

You can also get the status of the services:
sudo /sbin/service cloudera-scm-server-db status
sudo /sbin/service cloudera-scm-server status
sudo /sbin/service cloudera-scm-agent status

You may want to run ps -ef | grep cloudera-scm to verify all processes are stopped on each server. A sketch for doing this across many nodes follows below.
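If you have many nodes, a minimal sketch like the following can stop the agents and verify nothing is left running. It assumes passwordless SSH, that sudo works non-interactively on the remote hosts, and a hypothetical nodes.txt listing one agent hostname per line (none of this is from the original post):

# nodes.txt is a hypothetical file with one agent hostname per line
while read -r host; do
  echo "== $host =="
  ssh -n "$host" "sudo /sbin/service cloudera-scm-agent stop"
  # [c]loudera excludes the grep process itself from the match
  ssh -n "$host" "ps -ef | grep [c]loudera-scm || echo 'no cloudera-scm processes running'"
done < nodes.txt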
06-16-2016
08:05 PM
Can we monitor the NameNode edit logs and use that to trigger file copies continuously from one cluster to another?
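Not from the original thread, but one common, simpler alternative to watching the edit logs is to run an incremental distcp on a schedule so only changed files are copied. A minimal sketch, where source-nn, dest-nn, and /data are placeholders:

# -update copies only files that are missing or have changed at the destination
hadoop distcp -update hdfs://source-nn:8020/data hdfs://dest-nn:8020/data

# To approximate continuous replication, schedule it, e.g. every 15 minutes via cron:
# */15 * * * * hadoop distcp -update hdfs://source-nn:8020/data hdfs://dest-nn:8020/data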
05-12-2016
10:46 AM
1 Kudo
If you happen to be in the Austin area, you should consider participating in our Hackathon this coming Sunday, May 15, hosted at the Cloudera office and sponsored by Cloudera Cares.
The hackathon will focus on reducing mosquito-borne virus infections by analyzing water data, mapping mosquito travel, and performing historical virus analysis.
515 Congress Ave., Suite 1212, Austin, TX
10:30am to 8:00pm
RSVP and further details here.
03-02-2016
11:24 PM
1 Kudo
What action triggered the stack trace? The stack trace is from deep within Spring and suggests a system-level issue, e.g. out of memory. A few things to check:
- the server log (/var/log/cloudera-scm-server/cloudera-scm-server.log)
- the management daemon logs (/var/log/cloudera-scm-firehose/*.log)
- "Hosts" -> "All Hosts" for memory pressure; the "Resources" tab of an individual Host page may help as well
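A quick sketch of those checks from the command line, using the log paths listed above (the error patterns are just examples, not specific messages from this case):

# Scan the Cloudera Manager server log for recent errors
tail -n 200 /var/log/cloudera-scm-server/cloudera-scm-server.log | grep -iE 'error|outofmemory'

# Scan the management daemon logs
grep -iE 'error|outofmemory' /var/log/cloudera-scm-firehose/*.log | tail -n 50

# Check memory pressure on the host
free -m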
01-12-2016
09:42 AM
3 Kudos
Symptoms
"Permission denied" errors can present in a variety of use cases and from nearly any application that utilizes CDH.
For example, when attempting to start the JobTracker using this command:
service hadoop-0.20-mapreduce-jobtracker start
you may see this error, or one similar to it:
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
While the steps to reproduce this error can vary widely, the root causes are very well defined and you'll know you're suffering from this issue by finding the following line either on stdout or in the relevant log files:
org.apache.hadoop.security.AccessControlException: Permission denied: user=XXX, access=WRITE, inode="/someDirectory":hdfs:supergroup:drwxr-xr-x
Applies To
CDH (all versions), MapReduce, HDFS, and other services that rely on reading from or writing to HDFS
Cause
Access to the HDFS filesystem and/or permissions on certain directories are not correctly configured.
Troubleshooting Steps
There are several solutions to attempt:
1) The /user/ directory is owned by "hdfs" with 755 permissions, so only the hdfs user can write to that directory. Unlike Unix/Linux, the HDFS superuser is hdfs, not root. So you would need to do this:
sudo -u hdfs hadoop fs -mkdir /user/myfile
sudo -u hdfs hadoop fs -put myfile.txt /user/myfile/
If you want to create a home directory for root so you can store files in it, do:
sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root /user/root
Then as root you can do "hadoop fs -put file /user/root/".
2) You may also be getting denied on the network port where the NameNode is supposed to be listening:
Fix this by changing the address that the service listens on in /etc/hadoop/conf/core-site.xml. By default your NameNode may be listening on "localhost:8020" (127.0.0.1).
So to be clear, implement this value for the following property:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://0.0.0.0:8020</value>
</property>
Then bounce the service with "service hadoop-hdfs-namenode restart". Optional: validate with netstat -tupln | grep '8020'.
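To verify the fix, or to confirm which of the two causes applies, a quick check from a client host (the directory path matches the example exception above; namenode-host is a placeholder, and nc may need to be installed):

# Show the owner and permissions of the directory named in the AccessControlException
hadoop fs -ls -d /someDirectory

# Confirm the NameNode is reachable on port 8020 from the client host
nc -zv namenode-host 8020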
12-30-2015
05:27 AM
Hi Darren, We had a similar problem while installing Cloudera Manager 5.5. The cloudera-scm-server process was failing after a few seconds, and the scm-server.out file indicated that the log4j file was not available. We uninstalled CM and re-installed it; however, the problem persisted. As per your suggestion, we manually created a log4j.properties file and added the contents. We then 1) stopped cloudera-scm-server-db, 2) restarted postgresql, 3) started cloudera-scm-server-db, and 4) started cloudera-scm-server, and it worked!! We were immediately able to access CM via the browser. Thanks a lot for your help. Regards, Yogesh
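For reference, a sketch of that restart sequence as commands, assuming init-script service management and the default embedded PostgreSQL setup (service names match those used elsewhere in this thread):

sudo service cloudera-scm-server-db stop     # 1) stop the embedded CM database service
sudo service postgresql restart              # 2) restart PostgreSQL
sudo service cloudera-scm-server-db start    # 3) start the embedded CM database service
sudo service cloudera-scm-server start       # 4) start the Cloudera Manager server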