Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26261 | 03-03-2020 08:12 AM |
| | 16421 | 02-28-2020 10:43 AM |
| | 4725 | 12-16-2019 12:59 PM |
| | 4475 | 11-12-2019 03:28 PM |
| | 6679 | 11-01-2019 09:01 AM |
11-16-2018
09:27 AM
@VijayM, The openssl debug information indicates that the client makes a connection to the server but the server does not return a certificate. Since a direct connection to HiveServer2 does not have the problem, I conclude that your haproxy is still doing TLS termination even though your configuration snippet would indicate otherwise.

Based on what you have provided, it appears that:

1. your connection to port 10001 is using TLS termination at the haproxy
2. the server certificate is not valid, so no TLS handshake can be completed

Basically, the configuration you show cannot be the one in use by the haproxy that is running and listening on port 10001, so perhaps it was not restarted.

openssl s_client will return the following error if the port it connects to is not listening on TLS:

`139972358285128:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:769:`

Since you are seeing:

`SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177`

that indicates there was an actual problem on the server side, and the server in this case must be your haproxy.

So, I think it would be good to list the full haproxy configuration file and also make sure that haproxy really did restart since your last change. I used your config file and pass-through TLS worked perfectly to my HS2 servers; I actually copied and pasted your config and changed only the hostnames. I think we must be fighting a haproxy config/restart issue, since the frontend/backend you showed last worked for me.
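One quick way to confirm whether the proxy is terminating TLS or passing it through is to compare the certificate chain you get through the load balancer with the one you get from HiveServer2 directly; with true pass-through, both should present the HiveServer2 certificate. This is just a sketch, with the same placeholder hostnames as above:

```
# Through the load balancer; with pass-through you should see the HiveServer2 certificate chain.
openssl s_client -connect <load_balancer_host>:10001 -showcerts </dev/null

# Directly against one HiveServer2 instance, for comparison.
openssl s_client -connect <hiveserver2_host>:10000 -showcerts </dev/null
```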
11-15-2018
09:56 AM
@AKB, solrctl will pass the --negotiate option to curl regardless of whether Kerberos is enabled in the cluster; the option is only useful when Kerberos is enabled, though. The problem is that your OS's version of curl does not support "--negotiate", which means, as Patrick said, that you have a non-standard version of curl or the curl libraries installed on that host. You can find out where the curl binary comes from with something like this:

# which curl
# rpm -qf `which curl`
# curl -V

Basically, you need to install a version of curl that supports --negotiate on the host where you are running solrctl.
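As a rough check (assuming an RPM-based OS, as in the commands above), the feature list printed by curl -V should include negotiate/SPNEGO support on a stock distribution build:

```
# Show which package owns the curl binary actually found in PATH.
rpm -qf "$(which curl)"

# A stock distribution curl lists GSS-Negotiate (older releases) or SPNEGO (newer) among its features.
curl -V | grep -iE 'gss-negotiate|spnego'
```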
11-15-2018
08:44 AM
@VijayM, The configuration you are using is not correct, as it is a mix of pass-through and termination. You can remove everything from ssl onward in the line:

bind *:10001 ssl crt /app/bds/security/x509/cmserver.pem

so it becomes:

bind *:10001

I looked back at my first post and it appears I made a mistake when pasting and forgot to remove the "ssl" part from my pass-through example. Sorry for the confusion.

NOTE: If you are doing TLS termination, then being prompted for the key password is expected if your key file is password protected.

NOTE2: To get rid of that WARNING if you want to use termination, add tune.ssl.default-dh-param 2048 to the "global" section of your haproxy.cfg and restart.

In order to debug the

javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake (state=08S01,code=0)

issue, we really need to see why the handshake is being terminated. If the logs of haproxy and both HiveServer2 servers don't show any TLS messages at the time of the failure, then the next best thing is to do a packet capture on the host where beeline is run and also on the HiveServer2 server. Since the TLS handshake is done in the clear, a packet capture can be opened in Wireshark where the handshake will be evident. For example:

1 - run on the beeline host: # tcpdump -i any -w ~/beeline.pcap port 10001
2 - run on the HiveServer2 host (shut one down so that the load balancer must choose the other and you know which): # tcpdump -i any -w ~/hs2.pcap port 10000
3 - run the beeline command so that it fails
4 - Ctrl-C both tcpdumps
5 - open the pcap files in Wireshark. You may need to use "Decode As..." to decode ports 10001 and 10000 as SSL/TLS in order to see the TLS handshake.

If you are unfamiliar with packet capture/Wireshark, then try this:

# openssl s_client -connect <load_balancer_host>:10001 -msg -debug

This will have the openssl client print out the handshake process via the load balancer.
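For reference, a minimal pass-through (TCP mode) frontend/backend might look roughly like the following; the section names and backend hostnames are placeholders, and your balance settings may well differ:

```
frontend hiveserver2_frontend
    bind *:10001
    mode tcp
    option tcplog
    default_backend hiveserver2_backend

backend hiveserver2_backend
    mode tcp
    balance source
    server hs2_1 hs2-host1.example.com:10000 check
    server hs2_2 hs2-host2.example.com:10000 check
```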
11-14-2018
05:39 PM
2 Kudos
@orak, OpenLDAP is just fine for hadoop LDAP purposes. Active Directory is part of many existing IT infrastructures, so it is often used because it combines LDAP and Kerberos (along with other things). Users in your Kerberos KDC and LDAP server do not necessarily need to originate in the same object. Any true relationship between the two, where the KDC principal exists in an end-user object used for authentication, would exist due to some sort of integration at the KDC / LDAP server level. This is not necessary for hadoop services to work.

In general, there are 3 needs if you are going to secure your cluster with Kerberos:

- Kerberos itself
- a means of mapping users to groups (usually OS shell-based, but it can be LDAP-based)
- OS users as which services will run, plus end-user OS users for YARN containers (running MR jobs)

If I kinit as bgooley@EXAMPLE.COM and then attempt to list a directory that is readable only by its owning user/group and owned by someone else, the NameNode must be able to determine whether the user is a member of the group that has permission to list the files. The principal is trimmed to a "short name" by stripping off the realm, arriving at bgooley. The user bgooley's group membership is then determined (via shell group mapping or LDAP group mapping). See the following for details: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html This mapping is used by several services, so it is part of core hadoop.

Then you have the OS users that must exist at the OS level so that various processes can start as those users and files can be owned by them. YARN containers will also store information in the OS filesystem as the user running the job, which means that users who run jobs need to exist on all nodes in the cluster.

Some of these topics are covered in a bit more detail here: https://www.cloudera.com/documentation/enterprise/latest/topics/sg_auth_overview.html That's a lot to process, so I'll stop there and wait to see if you have any questions.
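If it helps, here is a quick way to see both mappings described above on a live cluster. This is just a sketch; bgooley and EXAMPLE.COM are only the example names used in this post:

```
# Show how a Kerberos principal is trimmed to its short name by the auth_to_local rules.
hadoop org.apache.hadoop.security.HadoopKerberosName bgooley@EXAMPLE.COM

# Show which groups the configured group mapping resolves for that short name.
hdfs groups bgooley
```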
11-14-2018
12:37 PM
@VijayM, Based on your original message and your configuration, I think the HAProxy bind port is the issue. You have:

bind *:443

But you are trying to connect via TLS to port 10001. Maybe try:

bind *:10001

Then restart HAProxy. Hope it is that simple. If that doesn't work, let us know and we can use openssl s_client to observe the handshake and see what happens.
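After restarting, a quick sanity check confirms that the running haproxy is actually listening on the new port. This is a sketch; the exact restart command depends on how haproxy was installed:

```
# Restart haproxy (systemd example; use "service haproxy restart" on older init systems).
systemctl restart haproxy

# Confirm that haproxy is now listening on port 10001.
ss -tlnp | grep ':10001'
```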
11-13-2018
07:43 AM
@orak, It is not clear what role the OpenLDAP server will fulfill. What information are you storing there and how will it be used by the hadoop cluster? OpenLDAP is an LDAP server only, so you can't really add a KDC to it. Do you perhaps mean that you are using IPA?

If you are storing your service principals in the MIT KDC and your users also exist in the MIT KDC, there is no need for cross-realm trust. Cross-realm trust is only required if your hadoop cluster's realm differs from the users' realm. For example, if your users existed in Active Directory and authenticate to AD, but you want to allow those users access to hadoop, you would need to configure one-way cross-realm trust.
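For context only (you would not need this in your scenario), a one-way trust from an AD realm to an MIT cluster realm is typically established by creating a shared cross-realm krbtgt principal and mapping the foreign realm's principals to short names. Roughly, and with purely hypothetical realm names:

```
# In the MIT KDC: create the cross-realm principal. Its password and encryption types
# must match whatever is configured for the trust on the AD side.
kadmin.local -q "addprinc krbtgt/CLUSTER.EXAMPLE.COM@AD.EXAMPLE.COM"

# In core-site.xml, hadoop.security.auth_to_local would gain a rule to trim AD principals
# to short names, for example:
#   RULE:[1:$1@$0](.*@AD\.EXAMPLE\.COM)s/@.*//
```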
11-08-2018
03:04 PM
@Timothy, I'm not sure what "rehydrate our EMR the Superuser is no longer in the system." means. Are you deleting your Hue database users from Hue itself? The is_superuser flag is associated with your Hue user in the Hue database. Once there is an LDAP-authenticated user that is a superuser, no other users will be able to become superuser without you granting that access explicitly.

If you want to clean out the Hue users from the Hue database and start over while preventing a random user from getting superuser access as the first user to log in, you could temporarily configure the LDAP search filter to only return your user. Once you have logged into Hue, change the filter back to what you want.

Please visit the Cloudera upgrade documentation to review what is required for upgrading when the time comes. It is a big upgrade and can require some manual processes, especially if you use Solr. It will be available for download when it is released to the public.
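As a rough illustration of the temporary-filter idea (assuming search-bind LDAP authentication, and with purely hypothetical attribute values that you would adjust to your own directory schema), the relevant hue.ini section might look something like this while you re-create your account:

```
[desktop]
  [[ldap]]
    search_bind_authentication=true
    [[[users]]]
      # Temporarily restrict LDAP logins to a single account so it becomes the first
      # (and therefore superuser) Hue user; relax the filter again afterwards.
      user_filter="(&(objectClass=user)(sAMAccountName=timothy))"
```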
11-08-2018
09:13 AM
@Krish216, If you only created a JKS file with a private key and then imported the CA certificates, you will have a self-signed certificate. You would still need to create a CSR and have it signed by a certificate authority in order for the certificate not to be self-signed. Without seeing each command you ran, it is not possible to confirm.

That said, your issue is not caused by TLS if you only see:

Authentication failure for user: '__cloudera_internal_user__mgmt-ACTIVITYMONITOR-15d443db68f73fcfa654fd83bf04540e' from

This means that the TLS handshake completed and the client then attempted to authenticate with its username and password. I would suggest making sure you have done the following after enabling TLS for the admin console and restarting Cloudera Manager with service cloudera-scm-server restart:

- Make sure you have configured the truststore for the Cloudera Management Service. If the certificate is self-signed, then you can use the same JKS file you specified for the keystore in the CM config.
- Restart the Cloudera Management Service from the Cloudera Manager UI. The Cloudera Management Service roles must be able to connect to and authenticate to Cloudera Manager in order to start.
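For reference, the CSR round trip usually looks roughly like the following keytool sequence; the keystore path, aliases, and file names here are placeholders, not taken from your environment:

```
# Generate a CSR from the existing private key entry.
keytool -certreq -alias cmhost -keystore /opt/cloudera/security/jks/cmhost.jks -file cmhost.csr

# After the CA signs the CSR: import the CA chain first, then the signed certificate
# onto the SAME alias that holds the private key.
keytool -importcert -alias rootca -keystore /opt/cloudera/security/jks/cmhost.jks -file root_ca.pem
keytool -importcert -alias cmhost -keystore /opt/cloudera/security/jks/cmhost.jks -file cmhost_signed.pem
```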
11-07-2018
05:41 PM
@Krish216, Glad to hear you are enabling security. Assuming that you generated a CSR (certificate signing request), it was signed by your CA (Certificate Authority), and you imported that same signed certificate into your keystore, you should then see the signed certificate in your JKS file, listed by keytool as a PrivateKeyEntry.

If you see the "self-signed" certificate in your JKS for the PrivateKeyEntry, but you also see your server certificate (the signed one) elsewhere in the JKS, that indicates that the imported certificate did not match the key from which the CSR was generated.

If you can share more information about what you did and what you see (screenshots or command-line text would be great), then we might be able to understand more clearly what the underlying problem is.
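A quick way to check this (the keystore path is just a placeholder) is to list the keystore and look at the PrivateKeyEntry: its certificate's Issuer should be your CA, whereas a self-signed certificate shows the same DN for Owner and Issuer:

```
# List all entries; the server certificate should appear under "Entry type: PrivateKeyEntry".
keytool -list -v -keystore /opt/cloudera/security/jks/cmhost.jks
```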
11-07-2018
05:26 PM
@Timothy, I believe the feature you are seeking has been introduced to the codebase only in the last few months: https://issues.cloudera.org/browse/HUE-7407 This fix is likely to make it into CDH 6.1 but I don't think there are plans to add it to 5.15.x.