Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26871 | 03-03-2020 08:12 AM |
| | 16962 | 02-28-2020 10:43 AM |
| | 4908 | 12-16-2019 12:59 PM |
| | 4697 | 11-12-2019 03:28 PM |
| | 6998 | 11-01-2019 09:01 AM |
01-08-2018
10:32 PM
@Srinivs, In CDH, your hosts currently need to have all-lowercase letters in their fully-qualified hostnames. The change would be to your host's hostname configuration (e.g. /etc/sysconfig/network); the exact file depends on your OS and configuration.
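A quick way to check whether a host's FQDN contains uppercase letters is a sketch like the following (assumes `hostname -f` returns the fully-qualified name on your OS):

```shell
# Compare the FQDN against its lowercased form; any mismatch means
# the hostname contains uppercase letters that CDH will not accept.
fqdn="$(hostname -f)"
lower="$(printf '%s' "$fqdn" | tr '[:upper:]' '[:lower:]')"
if [ "$fqdn" != "$lower" ]; then
  echo "Hostname '$fqdn' contains uppercase letters; CDH expects '$lower'"
fi
```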
01-08-2018
10:24 PM
@desind, Sorry for the delay; I am catching up on some older community posts from before the holidays. The exception you see occurs when you have Active Directory external authentication configured for Navigator authentication, but Navigator cannot find trust for your LDAPS server's certificate. See the following documentation for details: https://www.cloudera.com/documentation/enterprise/latest/topics/cn_admcfg_auth_openldap.html#configuring_ldap_over_tls Trust for the LDAPS server's certificate signer is defined in the JDK that is used to run Navigator. If you have questions, please reply.
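Importing the AD CA certificate into the JDK truststore generally looks like the following sketch. JAVA_HOME, the certificate path, and the alias here are assumptions for your environment; `changeit` is the JDK's default truststore password.

```shell
# Hypothetical paths: adjust JAVA_HOME and the exported AD CA certificate location.
JAVA_HOME=/usr/java/jdk1.8.0_144
sudo "$JAVA_HOME/bin/keytool" -importcert \
  -alias ad-ldaps-ca \
  -file /tmp/ad-ca.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit -noprompt

# Confirm the certificate is present, then restart Navigator.
"$JAVA_HOME/bin/keytool" -list \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit -alias ad-ldaps-ca
```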
01-07-2018
02:42 PM
1 Kudo
Hi @cloud123user, One of the advantages of having Cloudera distribute Hadoop is that Cloudera tests for stability on certain OSes. CDH 5.8.3 was not tested on CentOS 7.3. This does not mean it won't work; it probably will. The "supported" designation indicates the level of assurance we can give our users that we have tested a combination and confirmed it works. While 5.8.3 will likely work on CentOS 7.3, we would recommend upgrading to a Cloudera Manager and CDH version that we have tested on 7.3 at your earliest convenience. -Ben
01-07-2018
02:31 PM
@digitalrinaldo, When adding hosts via the wizard, a script is executed as root on the new host that installs the necessary packages. If you chose in the wizard to have Cloudera Manager manage Java, it will also install the Java packages from Cloudera's repository. By default, Java is not installed for you; a box needs to be checked to have CM manage Java, so perhaps that is what happened. At this stage, though, I recommend you install your own JDK 1.8, as the Cloudera repository has JDK 1.7_67.
01-07-2018
02:22 PM
@Amir, Please provide us with your agent log showing the error. There are many reasons why the agent would not be able to connect to the supervisor, so we need to see the agent log information to determine what the cause may be. Thanks, Ben
01-07-2018
01:09 PM
@CTSEH1, Please confirm that you are seeing exactly the same LDAP problem. "No groups found for user..." errors can have many causes. We would need to see logs leading up to and including the error in order to understand whether we are seeing exactly the same cause. Ben
01-07-2018
01:08 PM
1 Kudo
@JoaoBarreto, Based on the stack trace and errors, you have HDFS configured for LDAP Group Mapping, which means Hadoop applications will resolve group membership via LDAP. The LDAP configuration is in your HDFS configuration. This group lookup is outside of Kerberos completely. We see that the LDAP connection fails with "error code 49". This means that the Bind DN and Bind DN Password provided in the Cloudera Manager HDFS configuration for LDAP Group Mapping do not match what is in the LDAP server you have configured for those group lookups. Since the client cannot look up groups, the group is not found and the operation fails with the error. To correct this, confirm with your LDAP administrator that the user and password you have configured are correct. It is possible that the Active Directory user account you were using had its password changed, if this configuration worked at some time in the past.
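One way to verify the bind credentials outside of Hadoop is a manual simple bind with `ldapsearch`. This is a sketch; the server URL, DNs, password, and queried account are placeholder assumptions you would replace with the values from your HDFS LDAP Group Mapping configuration.

```shell
# A failing bind prints "ldap_bind: Invalid credentials (49)",
# which matches the error code 49 seen in the HDFS logs.
ldapsearch -x -H ldaps://ad.example.com:636 \
  -D "CN=hadoop-bind,OU=ServiceAccounts,DC=example,DC=com" \
  -w 'BindPassword' \
  -b "DC=example,DC=com" \
  '(sAMAccountName=someuser)' memberOf
```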
01-07-2018
12:26 PM
1 Kudo
@Tomas79, The "Cluster has stale Kerberos client configuration" message indicates that some configuration change in Cloudera Manager to your Kerberos configuration resulted in a change to the managed krb5.conf file. I am not sure what the upgrade may have done, but it would be worth checking your Cloudera Manager configuration: go to Administration --> Settings, click the History and Rollback link, and see if there were any recent changes to your Kerberos configuration. If you don't find anything conclusive, the following should clear this up:
- Stop CDH and the Cloudera Management Service.
- Copy aside one of your existing /etc/krb5.conf files (for later comparison).
- From the cluster drop-down on the Cloudera Manager home page, choose Deploy Kerberos Client Configuration and deploy.
- After the deploy is complete, start the Cloudera Management Service and CDH.
If the issue still occurs, let us know. You may also want to compare the previous and new /etc/krb5.conf files to see if there are differences. I am not sure what happened to cause this situation, but the steps should help (as you suggested).
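The copy-aside and comparison steps can be sketched as follows (backup path is an arbitrary choice; /etc/krb5.conf is the default managed location):

```shell
# Before redeploying, keep a copy of the managed krb5.conf for comparison.
cp /etc/krb5.conf /root/krb5.conf.before-redeploy

# ...after "Deploy Kerberos Client Configuration" completes in Cloudera Manager,
# show what actually changed:
diff -u /root/krb5.conf.before-redeploy /etc/krb5.conf
```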
01-07-2018
12:13 PM
@Sats, There is no one solution for this. In order for us to know that you have exactly the same condition, please share screenshots and logs that show what helps you determine that you are seeing an issue where the agent cannot heartbeat to Cloudera Manager after starting. In the original post, the error was "no route to host". This indicates problems in the network or the OS network configuration. Before going further, we'll need to know what problem you observe to make sure our investigation and help are targeted at the right issue. Thanks, Ben
01-03-2018
10:53 AM
@DataYogi, IPv4 is a function of your OS networking, so that is a matter for your host and network configuration. My point is that if you are unfamiliar with how database servers and other servers interact over IPv6, perhaps it would be best to only use IPv4 for now. See the following postgres documentation regarding addresses (including IPv6): https://www.postgresql.org/docs/9.3/static/auth-pg-hba-conf.html It appears you were missing the /64 subnet portion of the IP, as your interface shows: inet6 addr: 2402:1f00:8001:281::/64 Scope:Global I believe either of the following lines in the pg_hba.conf file would allow access from that one host:
host hue hue 2402:1f00:8001:281::/64 md5
or
host hue hue 2402:1f00:8001:281::/128 md5
Unless you need to restrict access, you can allow access from any IPv6 host with the following:
host hue hue ::0/0 md5
NOTE: Make sure there are no servers connecting to the embedded postgres database, and restart from the command line with "service cloudera-scm-server-db restart" after making any changes to ensure they take effect.
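For reference, a pg_hba.conf entry has five columns, so one of the suggested rules would sit in the file like this (database and user names taken from the post; pick one rule only):

```text
# TYPE  DATABASE  USER  ADDRESS                    METHOD
host    hue       hue   2402:1f00:8001:281::/128   md5
```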