
Here are the complete steps to enable HTTPS for WebHDFS.

Step1:

First, get the keystore to use in the HDFS configurations.

Follow the steps below if the cert is being signed by a CA:

1. Generate a JKS
keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048

2. Generate a CSR from the above keystore
keytool -certreq -alias c6401 -keyalg RSA -file /tmp/c6401.csr -keystore /tmp/keystore.jks -storepass bigdata

3. Now get the signed cert from the CA - the file name here is /tmp/c6401.crt

4. Import the root cert into the JKS first. (Skip if it is already present.)
keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/keystore.jks -storepass bigdata
Note: here ca.crt is the root cert

5. Repeat step 4 for the intermediate cert, if there is one.

6. Import the signed cert into the JKS
keytool -import -alias c6401 -file /tmp/c6401.crt -keystore /tmp/keystore.jks -storepass bigdata

7. Import the root cert into the truststore (this creates a new truststore.jks)
keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/truststore.jks -storepass bigdata

8. Import the intermediate cert (if there is one) into the truststore, similar to step 7.

If it is a self-signed cert:

1. Generate a JKS
keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048

Note: in the self-signed case, use keystore.jks for the truststore configurations as well.

Repeat Step 1 on every master component host.

Step2:

Log in to Ambari and configure/add the properties below in core-site.xml:

hadoop.ssl.require.client.cert=false
hadoop.ssl.hostname.verifier=DEFAULT
hadoop.ssl.keystores.factory.class=org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.server.conf=ssl-server.xml
hadoop.ssl.client.conf=ssl-client.xml
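If you maintain the config files by hand rather than through the Ambari UI, the same keys take the standard Hadoop property form in core-site.xml (a fragment showing two of the properties above; repeat for each key):

```xml
<!-- One <property> element per key listed above -->
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
</property>
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
```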

Step3:

Set the following properties (or add the properties if required) in hdfs-site.xml:

dfs.http.policy=HTTPS_ONLY
dfs.client.https.need-auth=false
dfs.datanode.https.address=0.0.0.0:50475
dfs.namenode.https-address=NN:50470

Note: you can also set dfs.http.policy=HTTP_AND_HTTPS to serve both HTTP and HTTPS. In the properties above, replace NN with your NameNode hostname.

Step4:

Update the configurations below under "Advanced ssl-server" (ssl-server.xml):

ssl.server.truststore.location=/tmp/truststore.jks
ssl.server.truststore.password=bigdata
ssl.server.truststore.type=jks
ssl.server.keystore.location=/tmp/keystore.jks
ssl.server.keystore.password=bigdata
ssl.server.keystore.keypassword=bigdata
ssl.server.keystore.type=jks

Note: create a separate keystore file for each NameNode host, named keystore.jks, and place it under /tmp/.

Step5:

Update the configurations below under "Advanced ssl-client" (ssl-client.xml):

ssl.client.truststore.location=/tmp/truststore.jks
ssl.client.truststore.password=bigdata
ssl.client.truststore.type=jks

ssl.client.keystore.location=/tmp/keystore.jks
ssl.client.keystore.password=bigdata
ssl.client.keystore.keypassword=bigdata
ssl.client.keystore.type=jks

Step6:

Restart the HDFS service.

Step7:

Make sure you import the CA root cert into the Ambari server by running "ambari-server setup-security".

Step8:

You should now be able to access the UI over HTTPS on port 50470.

Note: when you enable HTTPS for HDFS, the JournalNodes and NameNode start in HTTPS mode; check the JournalNode and NameNode logs for any errors. Copy the keystore.jks files to all NameNodes and JournalNodes, and copy the truststore files to all the HDFS nodes.
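Copying those files around can also be scripted. A dry-run sketch that only prints the copy commands (the hostnames are placeholders; remove the `echo` to actually run `scp`):

```shell
# Placeholder host list - replace with your NameNode/JournalNode/DataNode FQDNs
HOSTS="nn1.example.com nn2.example.com jn1.example.com"

for h in $HOSTS; do
  # Every HDFS node needs the truststore; keystores are per-host (see Step 1)
  echo scp /tmp/truststore.jks "$h:/tmp/truststore.jks"
done
```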

More articles

*. To enable HTTPS for MAPREDUCE2 and YARN - https://community.hortonworks.com/articles/52876/enable-https-for-yarn-and-mapreduce2.html

*. To enable HTTPS for HBASE - https://community.hortonworks.com/articles/51165/enable-httpsssl-for-hbase-master-ui.html

Comments
New Contributor

How do I create the keystore file? The note below says to create a separate keystore file for each node - can you kindly provide the steps to create the keystore?

Note: create a separate keystore file for each NameNode host, named keystore.jks, under /tmp/



New Contributor

Hi pappu ,

I have followed the same steps you mentioned to enable HTTPS (SSL) on my Sandbox HDFS. I am pretty sure everything matches your instructions, but I am not able to access my sandbox HDFS over HTTPS (port 50470). I am using a self-signed cert for the keystore. Can you kindly help me out with this?

A few questions:

1) Is the sandbox case different from a real cluster? 2) Does anything need to be done on the host system (OS)? 3) Do any user permissions need to be set on the certificates folder and files? The folder owner:group is hdfs:hadoop, with permission 700 on the folder and 600 on the files.

After setting everything up, the cluster services remain up, but HTTPS access does not succeed.

Help will be appreciated.

Thanks

@Syed Jawad Gilani

Do you still have this issue? Sorry - I did not check your questions earlier.

New Contributor

It's a pleasure to receive your reply. I have applied SSL to all the services mentioned in your posts on my cluster. Now I am just seeing a few service-related issues after enabling SSL. One of them: we are not able to run a Hive MapReduce job after enabling SSL. I am currently investigating the issue in which MR jobs fail with the error javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty. Can you give me a solution for this?

@Syed Jawad Gilani

It is difficult to tell the reason from the error message alone, but are you generating the keystore files with a newer Java version and trying to use them with an older Java? Please check that.

Not applicable

This was a ton of help towards getting HTTPS hooked up for me. I can connect to ports 50475, 8090, and 8044, but I am still having trouble with port 50470. Any thoughts?

New Contributor

Hi all,

I followed this guide to enable SSL for HDFS as the root user on my NameNode server. Then I copied the keystore and the truststore to my DataNode servers (as suggested here at step 4), because when I tried to start the HDFS service I received a FileNotFoundException on /tmp/keystore.jks. After copying the files to the DataNode servers I tried to start the HDFS service again, but on starting the DataNode servers I receive a "Permission Denied" error on the same file. Step 6 here suggests changing permissions on the server key location, but when I execute

chgrp -R yarn:hadoop /tmp/

I receive the following error:

chgrp: invalid group: ‘yarn:hadoop’

any suggestions for me?

Thanks in advance!

New Contributor

@amarnath reddy pappu I've followed all the steps and enabled SSL with a self-signed cert. Can you please confirm whether the WebHDFS URL below is the right way to access it? I am not getting any response when I use it.

https://hostname:50470/webhdfs/v1/test/testdata.txt?user.name=root&op=OPEN

New Contributor

What will be the steps if the cluster has 2 namenodes (active and standby) with 3 journal nodes?

Last update: 08-22-2016 11:03 PM