Member since
04-22-2014
1218
Posts
341
Kudos Received
157
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26238 | 03-03-2020 08:12 AM |
| | 16375 | 02-28-2020 10:43 AM |
| | 4707 | 12-16-2019 12:59 PM |
| | 4470 | 11-12-2019 03:28 PM |
| | 6652 | 11-01-2019 09:01 AM |
12-29-2016
10:22 AM
@zhuw.bigdata, I opened two internal Cloudera Jiras to make sure we specify that the fully-qualified domain name must be used if Kerberos is enabled in the cluster. One Jira targets the description in the HA wizard; the other focuses on the steps listed in our documentation. Thanks for bringing this up! Cheers, Ben
12-22-2016
02:25 PM
1 Kudo
All, the resolution to this error is to complete the HDFS HA enablement correctly. Thanks, everyone, for helping with it.

1. Pay attention to any Failover Controller (FC) role that already exists on the nodes you assign as the active and standby NameNodes for HDFS HA. Remove the FC from those nodes before enabling HDFS HA.
2. Have your JournalNodes Edits Directory set up. Usually it is /var/lib/jn.

Once HDFS HA is enabled, you can verify it in Cloudera Manager: HDFS > Instances > Federation and High Availability (click on it to see the setup), or HDFS > Configuration and search on "nameservice". In the field NameNodes Nameservice, you should see all the nodes that you assigned in HDFS HA.
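If you want to cross-check the result programmatically rather than through the CM pages above, here is a minimal sketch: it counts NameNode and Failover Controller roles in a JSON payload shaped like a Cloudera Manager roles API response. The role-type names, hostnames, and response structure in the sample are assumptions for illustration, not output captured from a real cluster.

```python
import json

# Sample payload shaped like a CM "list HDFS roles" API response.
# Role types, names, and hostnames are illustrative assumptions.
sample_roles = json.loads("""
{"items": [
  {"name": "hdfs-NAMENODE-1", "type": "NAMENODE", "hostRef": {"hostname": "nn1.example.com"}},
  {"name": "hdfs-NAMENODE-2", "type": "NAMENODE", "hostRef": {"hostname": "nn2.example.com"}},
  {"name": "hdfs-FC-1", "type": "FAILOVERCONTROLLER", "hostRef": {"hostname": "nn1.example.com"}},
  {"name": "hdfs-FC-2", "type": "FAILOVERCONTROLLER", "hostRef": {"hostname": "nn2.example.com"}},
  {"name": "hdfs-JN-1", "type": "JOURNALNODE", "hostRef": {"hostname": "jn1.example.com"}}
]}
""")

def count_roles(roles, role_type):
    """Count the roles of a given type in the response."""
    return sum(1 for r in roles["items"] if r["type"] == role_type)

# An HA-enabled HDFS should show two NameNodes and two Failover Controllers.
ha_enabled = (count_roles(sample_roles, "NAMENODE") == 2
              and count_roles(sample_roles, "FAILOVERCONTROLLER") == 2)
print("HA looks enabled:", ha_enabled)
```

The same counting logic would apply to a live response fetched from the CM REST API with admin credentials.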
12-09-2016
08:22 AM
1 Kudo
You should be fine. By design, Cloudera Manager does not remove any data from CDH. To rebuild, you would basically add the services that you had before and choose the same data locations you had previously. As mentioned, you could certainly use the 6-month-old database, too; CM will upgrade it the first time it starts. Either way, your HDFS will not have been touched by the process of reinstalling Cloudera Manager and re-adding the services. You will need to regenerate credentials after configuring Kerberos, as the keytabs are stored in Cloudera Manager's database. That will also not impact data, but it is another task you will need to perform. Ben
12-08-2016
03:41 PM
1 Kudo
It appears that the hostname configured for database access may be incorrect:

Caused by: java.net.UnknownHostException: ODC-HADOOP-MN
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
    ... 48 more

But then we also see:

Caused by: java.sql.SQLException: Schema version table SCHEMA_VERSION exists but contains no rows.
    at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:238)
    at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:177)
    at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1982)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1979)

That indicates that something went wrong during the initial population of the database and it is now inconsistent. I would recommend starting over, and also sharing the scm_prepare_database.sh command and options you used.
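The UnknownHostException part can be checked independently of Cloudera Manager: if the configured database host does not resolve through DNS or /etc/hosts, any JDBC connection will fail the same way. A minimal sketch (the failing hostname is invented for illustration):

```python
import socket

def resolves(hostname):
    """Return True if the OS resolver can map the hostname to an address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; a name absent from DNS and /etc/hosts
# (the .invalid TLD is reserved and never resolves) will not.
print(resolves("localhost"))
print(resolves("no-such-host.invalid"))
```

Running this on the CM host with the hostname from the stack trace is a quick way to confirm whether the problem is name resolution or something else.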
11-30-2016
08:28 AM
I was still stuck with not knowing the HUE_DATABASE_PASSWORD. You can get it in cleartext from the CM REST API. Make sure you're logged in to CM with admin rights: http://&lt;cm-host&gt;:7180/api/v3/clusters/&lt;cluster-name&gt;/services/&lt;service-name&gt;/config?view=full HTH, Bramd
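If you'd rather script the lookup than read the raw JSON, here is a minimal sketch of pulling one parameter out of a full-view config response. The parameter names, the value shown, and the response shape are illustrative assumptions, not real CM output:

```python
import json

# Sample shaped like a CM "?view=full" service config response.
# Parameter names and values here are illustrative assumptions.
sample_config = json.loads("""
{"items": [
  {"name": "hue_database_host", "value": "db1.example.com"},
  {"name": "hue_database_password", "value": "s3cret"}
]}
""")

def get_param(config, name):
    """Return the value of a named config parameter, or None if absent."""
    for item in config["items"]:
        if item["name"] == name:
            return item.get("value")
    return None

print(get_param(sample_config, "hue_database_password"))
```

Against a live cluster you would fetch the JSON from the API URL above (with admin credentials) and feed it to the same function.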
11-26-2016
06:57 AM
No one stopped or uninstalled the agent manually, because I'm the only one who manages that server. What I did that day was reinstall a MySQL server; I don't know if that is related to this issue. Trying to run cloudera-scm-agent suggests it was uninstalled: Failed to start cloudera-scm-agent.service: Unit cloudera-scm-agent.service failed to load: No such file or directory. So I reinstalled the agent and now it is working. Thanks
11-23-2016
04:46 PM
The certificate_unknown message is received as an alert from the caller initiating the TLS session. Generally, that means the client making a connection to the server did not trust the certificate. To find out who is really not trusting the NameNode certificate, check anything that connects to the NameNode. Mostly, I think, it is the DataNodes, which need to heartbeat in. Check your DataNode logs to find out if you get exceptions regarding trust when they attempt to make a connection. For the ResourceManager, check the NodeManagers' logs too. Once you have reviewed the logs, you will likely have a better idea of what is going on. You mentioned that "/var/lib/hadoop-hdfs/certs" is your truststore. If it is, it should contain the certificate for every host in your cluster. Also, make sure you have configured a path to it in the HDFS service configuration: "Cluster-Wide Default TLS/SSL Client Truststore Location"
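To make the log review less tedious, a small sketch that filters DataNode or NodeManager log lines down to the usual trust-related exceptions; the marker strings are common Java TLS failure signatures, and the sample log lines are invented for illustration:

```python
# Markers that typically appear when a Java client rejects a server certificate.
TRUST_MARKERS = (
    "SSLHandshakeException",
    "certificate_unknown",
    "unable to find valid certification path",
)

def trust_errors(log_lines):
    """Return only the log lines that mention a TLS trust failure."""
    return [line for line in log_lines
            if any(marker in line for marker in TRUST_MARKERS)]

# Invented sample lines, shaped roughly like DataNode log output.
sample_log = [
    "2016-11-23 10:01:02 INFO  DataNode: connecting to nn1.example.com:8022",
    "2016-11-23 10:01:03 WARN  DataNode: javax.net.ssl.SSLHandshakeException: PKIX path building failed",
    "2016-11-23 10:01:04 INFO  DataNode: retrying in 30s",
]

for line in trust_errors(sample_log):
    print(line)
```

Pointing the same filter at the real role logs should surface quickly which hosts are failing the TLS handshake.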
11-19-2016
08:11 AM
A web search for "appliance cloudera" does come back with a few other hits, like Teradata and Dell. Whether they have been certified with multihomed configurations is not information that is generally available.