Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26271 | 03-03-2020 08:12 AM |
| | 16423 | 02-28-2020 10:43 AM |
| | 4727 | 12-16-2019 12:59 PM |
| | 4477 | 11-12-2019 03:28 PM |
| | 6682 | 11-01-2019 09:01 AM |
11-06-2018
03:27 PM
Hi @VijayM, Without seeing the configuration you have, it is hard to say what is correct. Perhaps you can share it and we can see if there is something obvious. I would strongly suggest looking at the HAProxy logs and the HiveServer2 logs when the problem happens to look for any TLS errors or related messages.
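If it helps, you can also probe the handshake directly; a minimal sketch, assuming a placeholder hostname and port for your HAProxy frontend:

```bash
# Attempt a TLS handshake against the HAProxy frontend and print the
# certificate chain; handshake failures are reported directly.
openssl s_client -connect haproxy.example.com:10015 -showcerts </dev/null
```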
10-22-2018
09:47 AM
1 Kudo
@manjj, If you lost your database and then reinstalled CM, the agents will not complete the heartbeat to the new CM because the cm_guid they have stored does not match the value in CM. To correct this, remove the stale GUID and restart the agent on every host with a running agent (see the commands below). I think the reason you are seeing those errors on the parcels page is that the agents are in bad health due to the cm_guid mismatch. The cm_guid is generated by CM, and the agent stores it to make sure it does not communicate with an unexpected CM/database. Removing it allows the agent to accept communication with the new CM server/db that you have.
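On each affected host, the reset is:

```bash
# Remove the stale GUID recorded from the old CM installation.
rm /var/lib/cloudera-scm-agent/cm_guid
# Restart the agent so it re-registers and accepts the new CM's GUID.
service cloudera-scm-agent restart
```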
10-19-2018
05:10 PM
@VeljkoC, The following shows the cause of the problem:

2018-10-18 10:53:32,694 ERROR hive.log: [pool-5-thread-3]: Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=hue, access=EXECUTE, inode="/user":cloudera-scm:supergroup:drwxrwx---

An attempt was made by the Metastore to access HDFS, but the required permission for the user was not available. In this case, the hue user (used by the Service Monitor) needs EXECUTE privilege on /user, and we can see from cloudera-scm:supergroup:drwxrwx--- that it does not have it. The default permission on /user is drwxr-xr-x, so in your cluster it may have been changed for some reason. To allow listing of files in /user, you will need to add EXECUTE for "other". For example, the result should look like this:

drwxr-xr-x   - hdfs supergroup          0 2018-08-27 10:19 /user

That should help.
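If restoring the default is appropriate for your cluster, a minimal sketch (run as the HDFS superuser; adjust if you intentionally restrict /user):

```bash
# Restore world read/execute on /user so other users (e.g. hue) can list it.
sudo -u hdfs hdfs dfs -chmod 755 /user
# Verify the result; /user should now show drwxr-xr-x.
hdfs dfs -ls /
```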
10-19-2018
01:38 PM
@Kamlesh, If the Diagnostic Data Bundle Directory has not been configured in Cloudera Manager (Administration --> Settings --> Support), then the bundle file is stored in your Java temp directory, which defaults to /tmp. Check /tmp for a file whose name contains the string scm-command-result. For example, on my Cloudera Manager host, I see: /tmp/9245-scm-command-result-data3965754000660825578.zip
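A quick way to locate the bundle, assuming the default temp directory:

```bash
# Search the Java temp directory for any stored diagnostic bundle archives.
find /tmp -name '*scm-command-result*'
```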
10-17-2018
03:28 PM
4 Kudos
@JoaoBarreto, Currently there is no way to automate this in Cloudera Manager, but it is possible via manual configuration for each service, the agents, and Cloudera Manager (as well as the shell, if you are using hadoop commands at the command line). I would like to do some extensive testing at some point, but for now you can use the following.

Background: Java will use this configuration if it is set for the JVM:

-Djava.security.krb5.conf=/custom/path/to/krb5.conf

If java.security.krb5.conf is not set, Java looks in the following locations:

- /path_to_jdk/jre/lib/security/krb5.conf
- /etc/krb5.conf

MIT Kerberos-based servers can be configured with the following environment variable:

KRB5_CONFIG=/custom/path/to/krb5.conf

With the above rules in mind, these general steps can be followed (a quick verification sketch follows the list):

(1) Place your custom krb5.conf in the "jre/lib/security" subdirectory of your JDK's directory. Make sure the file permissions allow all service users to read it. Any client or server that uses that JDK (including Cloudera Manager) will then automatically read your custom krb5.conf rather than /etc/krb5.conf.

NOTE: The drawback of doing it this way is that if you upgrade Java, you will need to remember to put your krb5.conf back in place.

NOTE 2: If you choose to use -Djava.security.krb5.conf instead, you will have to configure it for all servers and clients in safety valves, files, etc. The plus of this approach, though, is that you do not have to remember to put your krb5.conf in place during JDK upgrades.

(2) For all agents in your cluster, add this to /etc/default/cloudera-scm-agent:

export KRB5_CONFIG=/custom/path/to/krb5.conf

(3) Add the following to the Hue Service Environment Advanced Configuration Snippet (Safety Valve):

KRB5_CONFIG=/custom/path/to/krb5.conf

(4) Add the following to the Impala Service Environment Advanced Configuration Snippet (Safety Valve):

KRB5_CONFIG=/custom/path/to/krb5.conf

You may also need to add this to the Impala Daemon Environment Advanced Configuration Snippet (Safety Valve):

JAVA_TOOL_OPTIONS="-Djava.security.krb5.conf=/custom/path/to/krb5.conf"

(5) Restart EVERYTHING (cluster, management service, agents, Cloudera Manager).

That should give you a good start.
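To confirm a JVM or an MIT Kerberos client actually reads the custom file, a minimal sketch (the path and principal are placeholders):

```bash
# The JVM flag form: any Java process started with this property reads
# the custom krb5.conf instead of the default locations.
java -Djava.security.krb5.conf=/custom/path/to/krb5.conf -version

# MIT Kerberos tools honor KRB5_CONFIG; a successful kinit shows the
# custom file is being used (placeholder principal).
KRB5_CONFIG=/custom/path/to/krb5.conf kinit someuser@EXAMPLE.COM
```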
10-17-2018
11:11 AM
@Gayathri68, You most likely need to remove the pid file. You'll need to find out where the pid file is stored; use "strace" to see where it looks for the file, if you are on a system where strace can be run. Rundeck is not a Cloudera product, though, so you might also check here: https://rundeck.org/help.html
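For example (the service name is a guess; adjust for how Rundeck is started on your system):

```bash
# Trace file-open syscalls during startup to see where Rundeck looks
# for its pid file, then search the trace output.
strace -f -e trace=open,openat -o /tmp/rundeck.trace service rundeckd start
grep -i pid /tmp/rundeck.trace
```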
10-17-2018
11:04 AM
@VijayM, If you are using TLS passthrough, then you don't need to configure certificates for HAProxy, as the TLS handshake is done with the HS2 servers themselves. This does add some extra work for you, though, as it means you need to be sure that the hostname(s) in the HS2 server certificates match the name of your HAProxy host. This can be done in a few ways, such as issuing a server certificate that contains a SubjectAltName value equal to the HAProxy host's fully-qualified domain name, or using a wildcard that matches the domain.

If you are using TLS termination, the client does the TLS handshake with HAProxy, which can then make either TLS or non-TLS connections to the backend servers. In this case, HAProxy will decrypt the incoming request and then re-encrypt it if your HS2 servers are listening on TLS ports. You do have to specify a server certificate for HAProxy's frontend, and you need to use a trust store to trust the signer of the HS2 certificates.

There is information out there, but this page (despite a few mistakes) does a pretty good job of discussing each approach: https://serversforhackers.com/c/using-ssl-certificates-with-haproxy

An example of pass-through is one I'm using on my server:

```
frontend hiveserver2_front
    bind *:10015 ssl crt /etc/cdep-ssl-conf/CA_STANDARD/cert_key.pem
    mode tcp
    option tcplog
    default_backend hiveserver2

backend hiveserver2
    balance source
    mode tcp
    server hiveserver2_1 tls12-1.example.com:10000 ssl ca-file /etc/cdep-ssl-conf/CA_STANDARD/truststore.pem
    server hiveserver2_2 tls12-4.example.com:10000 ssl ca-file /etc/cdep-ssl-conf/CA_STANDARD/truststore.pem
    server hiveserver2_3 tls12-2.example.com:10000 ssl ca-file /etc/cdep-ssl-conf/CA_STANDARD/truststore.pem
```

NOTE: in the above, I have mode tcp set, which means I'm using passthrough (no HTTP header evaluation and therefore no need to decrypt). Since I have server and truststore files configured, though, I could switch to mode http and do termination at the HAProxy. I'm no HAProxy expert, but I am pretty sure the above should help you.
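As a quick end-to-end check, you could connect through the proxy with beeline; a sketch, assuming placeholder host, port, and truststore values for your environment:

```bash
# Connect to HiveServer2 via the HAProxy frontend over TLS; a successful
# connection confirms the certificate and hostname setup end to end.
beeline -u 'jdbc:hive2://haproxy.example.com:10015/default;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit'
```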
10-17-2018
09:49 AM
@VeljkoC, Cloudera Manager performs health tests to ensure that CDH servers are running properly. One of these tests creates and drops a database in the Hive Metastore; the health alert you are seeing indicates that that process failed. Places to look for more information about the failure (example commands follow the list):

- Service Monitor (which issues the health check): /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR* ; search for the word "canary" or "hive" in that file.
- Hive Metastore log: /var/log/hive/hadoop-cmf-HIVE-1-HIVEMETASTORE* ; check for the word "canary", or look for WARN and ERROR messages pertaining to database names that include the string "canary".
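For example, using the paths above:

```bash
# Pull canary-related messages from the Service Monitor log.
grep -i canary /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR*
# Pull canary, WARN, and ERROR messages from the Hive Metastore log.
grep -iE 'canary|WARN|ERROR' /var/log/hive/hadoop-cmf-HIVE-1-HIVEMETASTORE*
```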
10-17-2018
09:12 AM
2 Kudos
@chriswalton007, The following page shows supported upgrades: https://www.cloudera.com/documentation/enterprise/upgrade/topics/ug_upgrade_paths.html As you have identified, upgrading from 5.15.x to 6.0.x is not supported. This is due to the way we had to maintain fixes and features in the 5 and 6 branches. Don't worry, though: only 5.15.x to 6.0.x is blocked, so you only need to wait a while longer for 6.1, at which point you can upgrade. We are targeting 6.1 to be released by the end of the year, but that is a rough estimate. For now, you can keep up with the latest 5.x release and start reading the C6 upgrade documentation. There are some major changes all over CDH, and for Solr the upgrade path is complex, so it is a good idea to start planning if you are using most of the CDH products.
10-15-2018
08:37 AM
1 Kudo
@wert_1311,

(1) Activity Monitor is only useful if you are using MapReduce1. I doubt you are, so if your jobs run on YARN you can simply remove Activity Monitor.

(2) In the instructions, you need to copy the driver to /usr/share/java. Please list the contents so we can see whether the cloudera-scm user has read access:

# ls -la /usr/share/java

You might compare permissions with the old, working host to verify that the same directory and the files it contains have the same permissions. The assumption here is that Java likely just cannot access the MySQL driver.
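A minimal sketch of that check (the driver jar name may differ on your system):

```bash
# List the driver and its permissions.
ls -la /usr/share/java/mysql-connector-java.jar
# Confirm the cloudera-scm user can actually read it.
sudo -u cloudera-scm test -r /usr/share/java/mysql-connector-java.jar && echo readable
```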