Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26248 | 03-03-2020 08:12 AM |
| | 16395 | 02-28-2020 10:43 AM |
| | 4716 | 12-16-2019 12:59 PM |
| | 4472 | 11-12-2019 03:28 PM |
| | 6657 | 11-01-2019 09:01 AM |
12-21-2016
03:36 PM
That's great! Nice detective work.
12-21-2016
01:38 PM
This sounds more like a server-side exception. I recommend checking the Oozie logs for exceptions being thrown when attempting to access the UI via load balancer. The exception should hopefully shed some light on what is happening. You could shut down one Oozie instance to ensure you know which log to look at.
12-21-2016
01:02 PM
Can you share the full error? What is the URL you used to try to access the UI?
12-21-2016
12:46 PM
You can check under Administration --> Security. Click on "Kerberos Credentials" and search for the hostname you entered as the proxy to view the credentials that are stored in Cloudera Manager. Cloudera Manager will automatically merge the keytabs and lay down the proper keytab in the Oozie process directory at the time it is started. You can run a klist on the file. You can see the latest process directory by running:

ls -lrt /var/run/cloudera-scm-agent/process | grep OOZIE

-Ben
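The lookup above can be sketched in shell. This is a minimal demo against a throwaway stand-in directory; the process-directory name "45-OOZIE-OOZIE_SERVER" is hypothetical, and on a real host the root would be /var/run/cloudera-scm-agent/process, finishing with klist on the laid-down keytab.

```shell
# Stand-in for /var/run/cloudera-scm-agent/process; the entry
# names below are hypothetical examples.
proc_root=$(mktemp -d)
mkdir -p "$proc_root/45-OOZIE-OOZIE_SERVER" "$proc_root/12-HDFS-NAMENODE"

# The newest entry matching OOZIE is the current process directory.
latest=$(ls -t "$proc_root" | grep OOZIE | head -1)
echo "latest Oozie process dir: $latest"

# On the real host you would then inspect the merged keytab:
#   klist -kt "$proc_root/$latest/oozie.keytab"
```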
12-20-2016
04:12 PM
See the error:

Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
    at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
    at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)

You can't do automatic failover to a secondary NameNode. You would need to enable HA to get that: http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html
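One quick way to confirm whether HA is actually configured is to look for a dfs.ha.namenodes.* property in the client hdfs-site.xml. The sketch below writes a throwaway config (the nameservice name "nameservice1" and NameNode IDs "nn1,nn2" are only examples) and checks for that property; on a real node you would grep the deployed /etc/hadoop/conf/hdfs-site.xml instead.

```shell
# Throwaway hdfs-site.xml; the nameservice and NameNode IDs are examples.
site=$(mktemp)
cat > "$site" <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>nameservice1</value></property>
  <property><name>dfs.ha.namenodes.nameservice1</name><value>nn1,nn2</value></property>
</configuration>
EOF

# ZKFC only makes sense when an HA nameservice with two NameNode IDs exists.
if grep -q 'dfs.ha.namenodes' "$site"; then
  ha_status="HA is configured"
else
  ha_status="HA is NOT configured; ZKFC will refuse to start"
fi
echo "$ha_status"
```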
12-20-2016
10:56 AM
supervisor.conf usually is read/write only for the owner (root), since it is used by the supervisor to start the process. Have you configured the agent to run as a user other than root? What are the file permissions listed in your FAILOVERCONTROLLER directory?
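To show concretely what to look for, the sketch below builds a throwaway stand-in for a FAILOVERCONTROLLER process directory and checks the mode on supervisor.conf; on the host itself you would run ls -l against the newest matching directory under /var/run/cloudera-scm-agent/process instead.

```shell
# Stand-in for a FAILOVERCONTROLLER process directory; on a real host
# supervisor.conf is normally mode 600 and owned by root.
dir=$(mktemp -d)
touch "$dir/supervisor.conf" "$dir/hdfs-site.xml"
chmod 600 "$dir/supervisor.conf"   # owner-only read/write, as expected
chmod 644 "$dir/hdfs-site.xml"

ls -l "$dir"
mode=$(stat -c '%a' "$dir/supervisor.conf")
echo "supervisor.conf mode: $mode"
```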
12-16-2016
08:47 AM
Since the exception is:

2016-09-23 10:33:39,098 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.FileNotFoundException: /dfs/nn/current/VERSION (Permission denied)
    at java.io.RandomAccessFile.open(Native Method)

the NameNode cannot start because it is unable to load the fsimage. The fsimage cannot be loaded because the "hdfs" user cannot read the VERSION file. I would check the permissions on your HDFS local disk directories on the NameNode. To resolve the issue in the exception, make sure the VERSION file is owned by the "hdfs" user, like this:

-rw-r--r-- 1 hdfs hdfs 172 Nov 7 14:37 /dfs/nn/current/VERSION

I hope that is the only issue; fixing this may lead to other permission-related issues if something else happened. If the owner of the file is shown as a number, that would indicate the OS cannot resolve the file's owner ID to a user.
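The ownership fix can be sketched as below. The block uses a temporary stand-in file, because actually changing ownership requires root; on the real NameNode the target is /dfs/nn/current/VERSION and the commented chown would run for real.

```shell
# Stand-in for /dfs/nn/current/VERSION (we cannot chown without root here).
dir=$(mktemp -d)
touch "$dir/VERSION"

# On the real NameNode, as root:
#   chown hdfs:hdfs /dfs/nn/current/VERSION
chmod 644 "$dir/VERSION"          # match the expected -rw-r--r-- mode

mode=$(stat -c '%a' "$dir/VERSION")
owner=$(stat -c '%U' "$dir/VERSION")
echo "VERSION mode=$mode owner=$owner"
```

If `stat` prints a numeric owner rather than a name, that is the ID-resolution symptom described above.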
12-09-2016
08:22 AM
1 Kudo
You should be fine. By design, Cloudera Manager does not remove any data from CDH. To rebuild, you would basically add the services that you had before and choose the same data locations that you used previously. As mentioned, you could certainly use the 6-month-old database too; CM will upgrade it the first time it starts. Either way, your HDFS will not have been touched by the process of reinstalling Cloudera Manager and re-adding the services.

You will need to regenerate credentials after configuring Kerberos, as the keytabs are stored in Cloudera Manager's database. That will also not impact data, but it is another task you will need to perform.

Ben
12-08-2016
03:41 PM
1 Kudo
It appears that the hostname configured for DB access may be incorrect:

Caused by: java.net.UnknownHostException: ODC-HADOOP-MN
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
    ... 48 more

But then we also see:

Caused by: java.sql.SQLException: Schema version table SCHEMA_VERSION exists but contains no rows.
    at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:238)
    at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:177)
    at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1982)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1979)

That indicates that something went wrong during the initial population of the database, leaving it inconsistent. I would recommend starting over, and also sharing the scm_prepare_database.sh command and options you used.
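A first sanity check for the UnknownHostException is whether the configured DB host resolves at all. The sketch below tests both localhost and the hostname from the stack trace (which will not resolve outside that network); on the real server you would check the host configured for the database connection the same way.

```shell
# Check whether each hostname resolves; "ODC-HADOOP-MN" is the host
# from the stack trace and is not expected to resolve here.
resolved=""
unresolved=""
for host in localhost ODC-HADOOP-MN; do
  if getent hosts "$host" >/dev/null 2>&1; then
    resolved="$resolved $host"
  else
    unresolved="$unresolved $host"
  fi
done
echo "resolves:$resolved"
echo "does not resolve:$unresolved"
```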
12-08-2016
11:23 AM
Uninstalling/reinstalling the embedded database should not remove or overwrite any data, as far as I know. You are saying that /var/lib/cloudera-scm-server-db has all new files dated from the day you reinstalled? I'd check what is in /etc/cloudera-scm-server, as I mentioned. I'm wondering if you may have been using a different db/directory somehow.

If not, there really isn't much else to do but rebuild your Cloudera Manager configuration from scratch, or try using the backup you have. The backup will likely work OK: when Cloudera Manager starts using that old db, it will upgrade it as necessary.

Ben