Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 18906 | 03-03-2020 08:12 AM
 | 10140 | 02-28-2020 10:43 AM
 | 3044 | 12-16-2019 12:59 PM
 | 2323 | 11-12-2019 03:28 PM
 | 4116 | 11-01-2019 09:01 AM
03-27-2018
04:25 PM
@sandy05, No, not usually, and that is why we didn't add the sleep there in the first place. To be honest, I don't know the history of why the "sleep" is needed on some OSes and not others; as far as I know, it has not been needed on el6. Based on your report of the issue, though, this situation usually ends up being resolved (in Cloudera internal cases) by inserting a sleep of 1 second. If that doesn't help, let us know and share the edited file with us so we can verify the change. Ben
03-27-2018
04:19 PM
Hi @dewdrop, Please see my initial comments in this thread. We'll need to know more about what you are trying to do and what the result is. Thanks, Ben
03-27-2018
10:32 AM
1 Kudo
@sandy05, This is a tricky one, but, in the past, this sort of issue was resolved by adding a 1-second sleep to the import script.

(1) Back up the file /usr/share/cmf/bin/import_credentials.sh on your Cloudera Manager host.

(2) Edit /usr/share/cmf/bin/import_credentials.sh on your Cloudera Manager host and locate this text near the top of the file:

# Determine if sleep is needed before echoing password.
# This is needed on Centos/RHEL 5 where ktutil doesn't
# accept password from stdin.
SLEEP=0

(3) Change:

SLEEP=0

to:

SLEEP=1

(4) Try using Cloudera Manager to import credentials again.

We have observed from time to time that the timing of the "addent" commands in the script leads to this sort of issue. Adding some sleep has resolved it in the past.

Regards,
Ben
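Steps (1) through (3) above can be sketched as a small script. This is an illustrative sketch only: it operates on a stand-in temp-style path rather than the real /usr/share/cmf/bin/import_credentials.sh, and it simulates the relevant line of the file so the logic is easy to follow.

```python
# Hedged sketch of steps (1)-(3). The path below is a stand-in; on a real
# Cloudera Manager host the script is /usr/share/cmf/bin/import_credentials.sh.
import shutil

script = "/tmp/import_credentials.sh"            # stand-in for the real path
with open(script, "w") as f:
    f.write("SLEEP=0\n")                         # simulate the line near the top

shutil.copy(script, script + ".bak")             # (1) back up the original

with open(script) as f:
    text = f.read().replace("SLEEP=0", "SLEEP=1", 1)   # (2)+(3) flip the flag
with open(script, "w") as f:
    f.write(text)

with open(script) as f:
    print(f.read().strip())                      # prints: SLEEP=1
```

After step (4), if the import still fails, the backup from step (1) makes it easy to revert.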
03-27-2018
08:49 AM
1 Kudo
@balajivsn, The important error we see is "KDC has no support for encryption type...". The script output shows an attempt to use rc4-hmac, but your KDC does not have a key with that encryption type for the user "admin/admin@HADOOP.COM".

In Cloudera Manager --> Settings --> Kerberos, in the "Kerberos Encryption Types" field, make sure you choose only those encryption types supported by your KDC. To see which encryption types your MIT KDC supports, look at your kdc.conf; by default it is located in /var/kerberos/krb5kdc/. For more information, see https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/kdc_conf.html

You can also log into your KDC and run "getprinc <principal>" to view the encryption types for that principal.

Once you have either created a key for that user with rc4-hmac or configured Cloudera Manager with encryption types that align with what was created for your admin user, this should work. Let us know if you hit any trouble or have questions.
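The comparison described above can be illustrated with a short sketch. The helper name (parse_supported_enctypes) and the sample kdc.conf text are assumptions for illustration; on a real KDC you would read the actual /var/kerberos/krb5kdc/kdc.conf.

```python
# Hedged sketch: compare the enctypes a KDC supports (from kdc.conf's
# supported_enctypes line) with what Cloudera Manager is configured to request.
def parse_supported_enctypes(kdc_conf_text):
    for line in kdc_conf_text.splitlines():
        line = line.strip()
        if line.startswith("supported_enctypes"):
            value = line.split("=", 1)[1]
            # entries look like "aes256-cts:normal"; keep only the enctype part
            return {entry.split(":")[0] for entry in value.split()}
    return set()

kdc_conf = """
[realms]
 HADOOP.COM = {
  supported_enctypes = aes256-cts:normal aes128-cts:normal
 }
"""
cm_types = {"rc4-hmac"}                       # what CM requests in this thread
supported = parse_supported_enctypes(kdc_conf)
print(cm_types & supported)                   # empty set -> the mismatch
```

An empty intersection is exactly the situation in this thread: CM asks for rc4-hmac, but the KDC has no key of that type for the principal.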
03-16-2018
08:51 PM
@Huriye, The error message is a fairly generic one, returned when something about the connection to the database fails. Try checking the Cloudera Manager log for more information and, hopefully, a stack trace or SQL error.
03-16-2018
08:28 PM
2 Kudos
@ramin, I'm happy to help. The problem now is that your /etc/krb5.conf is using a credentials cache type that Java cannot access. If you look at your klist output, you see:

Ticket cache: KEYRING:persistent:0:krb_ccache_KK2INr6

By default, recent Linux OSes define the "keyring" type of cache in /etc/krb5.conf. While MIT Kerberos's kinit command recognizes that cache type, Java does not. So, when you run "hdfs dfs -ls /", Java cannot find any Ticket Granting Ticket and you get the error.

To solve this, edit your /etc/krb5.conf and comment out the line containing "default_ccache_name" by adding a pound sign in front of it. For example:

#default_ccache_name = KEYRING:persistent:%{uid}

This lets the kinit command store the credentials cache in the default /tmp directory location using the "FILE" type of cache, which Java can access since it uses the same default type.
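The edit above is a one-character change, sketched here on a sample config string. The helper name (disable_keyring_ccache) is illustrative; on a real host you would edit /etc/krb5.conf itself (as root) rather than run a script.

```python
# Hedged sketch: comment out the default_ccache_name line in krb5.conf text
# by prepending a pound sign, leaving all other lines untouched.
def disable_keyring_ccache(conf_text):
    out = []
    for line in conf_text.splitlines():
        if line.strip().startswith("default_ccache_name"):
            line = "#" + line          # the pound sign disables the keyring cache
        out.append(line)
    return "\n".join(out)

sample = "[libdefaults]\n default_ccache_name = KEYRING:persistent:%{uid}"
print(disable_keyring_ccache(sample))
```

After the change, a fresh kinit will write a FILE-type cache under /tmp, which both the MIT tools and Java can read.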
03-16-2018
04:26 PM
Hi @ramin, When your Hadoop service credentials are created, the password is randomized, so you are not supposed to know what it is. If you would like, you can create a user in your KDC with a principal "hdfs@REALM" so that you choose the password. Alternatively, you can kinit via an hdfs keytab like this (assuming you are on a NameNode host):

kinit -kt /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' | grep NAMENODE | tail -1`/hdfs.keytab hdfs/hostname@REALM

The keytab contains the password, so you do not need to know it. That is why you need to be very careful to protect access to any keytabs you create.

All that said, it is advisable to create users who are not "hdfs" and then either make them superusers or grant them the permissions they need. That way, their actions can be reviewed via audit more readily.
03-16-2018
03:59 PM
1 Kudo
@Gabre, This error indicates that the server could not find a key to decrypt the authentication request. This can happen if the client requests a Service Ticket with a particular encryption type that the KDC has, but the HDFS NameNode's keytab does not have a key of that same encryption type.

Some things to check:

- /etc/krb5.conf: what encryption types do you have configured in libdefaults?

- Run this on your active NameNode host:

# klist -kte /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' | grep NAMENODE | tail -1`/hdfs.keytab

Note the encryption types; the ones in the klist output are the only ones that can be used to decrypt.

To verify which encryption type is being requested and returned in the service ticket reply, you can add some debugging to your hdfs command like this:

# HADOOP_ROOT_LOGGER=TRACE,console HADOOP_JAAS_DEBUG=true HADOOP_OPTS="-Dsun.security.krb5.debug=true" hdfs dfs -ls /
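When eyeballing the klist -kte output, the encryption types are the parenthesized values at the end of each line. A small sketch of extracting them for comparison against krb5.conf (the sample output below is fabricated for illustration):

```python
# Hedged sketch: pull the encryption types out of sample `klist -kte` output
# so they can be compared with the types permitted in /etc/krb5.conf.
import re

klist_output = """\
KVNO Timestamp         Principal
   5 03/16/18 10:00:00 hdfs/nn1.example.com@HADOOP.COM (aes256-cts-hmac-sha1-96)
   5 03/16/18 10:00:00 hdfs/nn1.example.com@HADOOP.COM (aes128-cts-hmac-sha1-96)
"""
keytab_enctypes = set(re.findall(r"\(([^)]+)\)", klist_output))
print(sorted(keytab_enctypes))
```

Any enctype the client requests that is absent from this set cannot be decrypted by the NameNode, which is the failure mode described above.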
03-09-2018
09:32 AM
1 Kudo
@Krish216, Do you have any other stack information, or is that all that appears? The failure there is occurring when attempting to get a TGT from your KDC. The "checksum" error likely occurs after the KDC has replied to the AS_REQ (TGT request), because the reply cannot be decrypted. It could be that your krb5.conf file lists encryption types that are not in the ZooKeeper keytab.

Recommendations:

1. Make certain that /etc/krb5.conf on the ZooKeeper host contains only the encryption types that are in the ZooKeeper keytab.
2. If (1) does not help or is not the issue, try regenerating the ZooKeeper credentials in Cloudera Manager to ensure that your keytab contains the same keys as the KDC for that principal.
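A related cause of the same decrypt failure, worth ruling out alongside recommendation (2), is a stale key version number. The values below are illustrative; read the real ones from the `klist -kt` and `kvno` command output.

```python
# Hedged sketch for recommendation (2): a decrypt/checksum failure can also mean
# the key version number (KVNO) in the keytab no longer matches what the KDC
# issues. The numbers here are made up for illustration.
keytab_kvno = 4     # highest KVNO shown by: klist -kt zookeeper.keytab
kdc_kvno = 5        # KVNO reported by: kvno zookeeper/host@REALM
if keytab_kvno != kdc_kvno:
    print("stale keytab: regenerate the credentials in Cloudera Manager")
```

Regenerating the credentials in Cloudera Manager brings the keytab's KVNO back in line with the KDC.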
03-08-2018
10:00 AM
@DanilaVanilla, Based on the message, you are attempting to add a new host to the cluster using the Add New Hosts to Cluster wizard. One of the wizard's steps checks whether the new host can resolve and communicate with the Cloudera Manager host, and that check is failing in some way.

You can ssh to that host and look for the scm_prepare_node.log file in the /tmp/scm_prepare_node directory for more information about the failure. Likely, there is a problem resolving your Cloudera Manager hostname from the new host. Check that, and make sure you can also connect to CM. You can use the same command that the prepare script uses to test your connection:

python -c 'import socket; import sys; s = socket.socket(socket.AF_INET); s.settimeout(5.0); s.connect((sys.argv[1], int(sys.argv[2]))); s.close();' <FQDN of your CM host> 7182

If there are no errors when running that command, you should be able to get past this issue.

-Ben
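The same reachability test can be wrapped with a readable failure message. This is a sketch, not the wizard's actual code; the function name is made up, and 7182 is the CM agent-to-server port the wizard checks.

```python
# Hedged sketch: the same TCP reachability test the wizard performs,
# with the error surfaced instead of a bare traceback.
import socket

def check_cm_port(host, port=7182, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"cannot reach {host}:{port}: {exc}")
        return False

# Demo against a port that should be closed locally; expect False.
print(check_cm_port("127.0.0.1", port=1))
```

In real use you would call check_cm_port("<FQDN of your CM host>") from the new host; True means the wizard's connectivity check should pass.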