
Issue with NameNode startup after enabling SSL

New Contributor

I'm setting up a 4-node HDP 2.5 cluster with a requirement to encrypt all data in transit. I've been following the documentation here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/ch_hdp-security-guide-w...

I am using a certificate signed by my company's Issuing CA. The following is in my server.keystore.jks (sensitive bits masked):

<server's FQDN>, Feb 10, 2017, PrivateKeyEntry, 
Certificate fingerprint (SHA1): B6:DA:29:57:27:10:D3:97:8D:CD:49:6C:87:82:9F:64:DD:XX:XX:XX 
<company> issuing ca, Feb 10, 2017, trustedCertEntry, 
Certificate fingerprint (SHA1): F7:20:77:9E:08:4F:20:2E:E6:8C:78:5D:EA:39:91:6F:D7:XX:XX:XX 
<company> root ca, Feb 10, 2017, trustedCertEntry, 
Certificate fingerprint (SHA1): 8D:4A:EA:A6:43:71:83:FE:44:FA:E5:04:D7:E3:5B:3A:45:XX:XX:XX

After configuring the system to use the keys and restarting the HDFS service in Ambari, the DataNode starts up fine. The NameNode also comes up, but Ambari then runs a health check against it using curl, and the following error shows up in the error log:

resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET -k 'https://<server's FQDN>:50470/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpiWgx4l 2>/tmp/tmpHLEISr' returned 35.
curl: (35) NSS: client certificate not found (nickname not specified)
000

I get the same result if I run the same command on the command line. In addition, if I try to access the same URL from Chrome, I get ERR_BAD_SSL_CLIENT_AUTH_CERT back. I don't have much experience setting up SSL/TLS and am pretty much stuck at this point. Any help would be greatly appreciated.

Thanks,

Mike

1 ACCEPTED SOLUTION


@Michael Locatelli

You will have to disable client authentication on the HDFS side:

set hadoop.ssl.require.client.cert=false in core-site.xml and restart the affected services.
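For reference, this property lives in core-site.xml (in Ambari it is managed under HDFS > Configs); a sketch of the relevant entry with the suggested value:

```xml
<!-- core-site.xml: do not require clients to present a TLS certificate -->
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
</property>
```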

You can follow the article I published some time back: https://community.hortonworks.com/articles/52875/enable-https-for-hdfs.html


5 REPLIES


New Contributor

hadoop.ssl.require.client.cert is already set to false in core-site.xml.


@Michael Locatelli

The keystore file should contain only the server's own certificate entry. Remove the truststore certs and other entries, since we are not defining any alias name in the configurations.

For example, a correct keystore file should look like this when I list it:

keytool -list -keystore skeystore.jks

-------------

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

apappu.hdp.com, Nov 16, 2016, PrivateKeyEntry,
Certificate fingerprint (SHA1): 50:2B:EF:1F:58:07:C3:0A:C6:29:B8:49:7B:98:1B:DD:A0:A8:33:A9

-------------

If you observe the output, there is no trustedCertEntry.
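The cleanup described above can be sketched with keytool. The file names, aliases, and password below are placeholders for a throwaway demo, not the real cluster keystore:

```shell
# Build a throwaway keystore that mirrors the problem: one server key
# plus a CA certificate imported as a trustedCertEntry.
keytool -genkeypair -alias demo-ca -keyalg RSA -dname CN=Demo-CA \
        -keystore ca.jks -storepass changeit -keypass changeit -validity 1
keytool -exportcert -alias demo-ca -file demo-ca.crt \
        -keystore ca.jks -storepass changeit
keytool -genkeypair -alias server -keyalg RSA -dname CN=server.example.com \
        -keystore demo.jks -storepass changeit -keypass changeit -validity 1
keytool -importcert -alias "demo root ca" -file demo-ca.crt -noprompt \
        -keystore demo.jks -storepass changeit

# Fix: delete the trustedCertEntry so only the PrivateKeyEntry remains
# (CA certificates belong in the truststore, not the keystore).
keytool -delete -alias "demo root ca" -keystore demo.jks -storepass changeit
keytool -list -keystore demo.jks -storepass changeit
```

On a real cluster you would run the -delete step against server.keystore.jks, using the alias names shown by keytool -list.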

New Contributor

@apappu Going through the article you linked, I found the issue: dfs.client.https.need-auth was set to true in hdfs-site.xml. After changing it, the error cleared and the NameNode came up with a valid SSL certificate! You just made my Friday 🙂
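For anyone hitting the same thing, a sketch of the hdfs-site.xml entry in question, with the working value (when this is true, HDFS HTTPS clients, including Ambari's curl check, must present a client certificate):

```xml
<!-- hdfs-site.xml: do not require HDFS clients to present a certificate over HTTPS -->
<property>
  <name>dfs.client.https.need-auth</name>
  <value>false</value>
</property>
```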


@Michael Locatelli

That's good to hear. Feel free to reach out to me with any SSL issues.