Member since: 06-20-2016
Posts: 308
Kudos Received: 103
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1956 | 09-19-2018 06:31 PM |
| | 1442 | 09-13-2018 09:33 PM |
| | 1409 | 09-04-2018 05:29 PM |
| | 4414 | 08-27-2018 04:33 PM |
| | 3484 | 08-22-2018 07:46 PM |
12-11-2016
06:05 AM
4 Kudos
Steps to configure ambari-server to archive log files:
1. Open the /etc/ambari-server/conf/log4j.properties file and change
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.MaxFileSize=80MB
log4j.appender.file.MaxBackupIndex=60
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
to
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.triggeringPolicy.MaxFileSize=10485760
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
log4j.appender.file.rollingPolicy.FileNamePattern=${ambari.log.dir}/${ambari.log.file}.%i.log.gz
Note: change the configurations appropriately as per your needs.
2. Download apache-log4j-extras.jar from https://logging.apache.org/log4j/extras/download.html
3. Copy the downloaded jar to the /usr/lib/ambari-server/ path.
4. Restart ambari-server and check that the log files are getting archived; look out for warnings in ambari-server.out. A sketch of steps 3-4 is shown below.
I have used https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html as a reference.
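For reference, a minimal shell sketch of steps 3-4, assuming the downloaded jar landed in /tmp (the exact jar file name depends on the version you downloaded):
# illustrative jar name only - use whatever version you downloaded in step 2
cp /tmp/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-server/
ambari-server restart
# confirm rotation/compression is happening and watch for appender warnings
ls -lh /var/log/ambari-server/ambari-server.log*.gz
grep -i warn /var/log/ambari-server/ambari-server.out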
11-28-2016
10:12 PM
@Robert Levas Yes, that is correct. I can see that the server cert validity is also set to 365 days at creation time, so the server cert will most likely expire as well:
openssl ca -create_serial -out /var/lib/ambari-server/keys/ca.crt -days 365 -keyfile /var/lib/ambari-server/keys/ca.key -key **** -selfsign -extensions jdk7_ca -config /var/lib/ambari-server/keys/ca.config -batch -infiles /var/lib/ambari-server/keys/ca.csr
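For what it's worth, a quick way to confirm the expiry dates (the agent cert file name, <hostname>.crt under /var/lib/ambari-agent/keys/, is an assumption based on a default install):
openssl x509 -in /var/lib/ambari-server/keys/ca.crt -noout -enddate
openssl x509 -in /var/lib/ambari-agent/keys/$(hostname -f).crt -noout -enddate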
11-28-2016
09:50 PM
6 Kudos
Ambari server usually generates certs with one year of validity. After a year, all agents fail to communicate with the Ambari server because both the agent and server certs have expired. The steps below can be followed to replace the expired certs (a rough shell sketch of the same steps is included at the end).
1. Stop ambari-server.
2. Take a backup of the existing /var/lib/ambari-server/keys folder and empty it.
3. Download the attached keys.zip file and copy it to /var/lib/ambari-server/. Your new folder structure should look like /var/lib/ambari-server/keys/ca.config, /var/lib/ambari-server/keys/db/, ... - basically a fresh keys folder (this is what you get when you install ambari-server).
4. Take a backup of all the agent certs located at /var/lib/ambari-agent/keys/ on all the hosts.
5. Delete all the files under the /var/lib/ambari-agent/keys/ folder.
6. Restart ambari-server.
Note: ambari-server should create new certs under /var/lib/ambari-server/keys/ (ca.crt, ca.key, ...).
7. Restart ambari-agent.
Note: ambari-agent should create new certs under the /var/lib/ambari-agent/keys/ folder. You should now see successful heartbeats from all the agents.
Note: If encryption is enabled on Ambari, copy the credentials.jceks and master files from the backed-up keys folder into the newly created keys folder.
Note: If SSL is enabled for the Ambari UI, you will have to re-run the SSL enablement step, since some of those certs are not part of the keys folder; alternatively, those files can be copied over to the new keys folder.
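For convenience, a rough shell sketch of the steps above; it assumes the attached keys.zip was copied to /tmp, and the paths are the defaults from the post:
# on the Ambari server host
ambari-server stop
cp -rp /var/lib/ambari-server/keys /var/lib/ambari-server/keys.bak    # step 2: back up existing keys
rm -rf /var/lib/ambari-server/keys
unzip /tmp/keys.zip -d /var/lib/ambari-server/                        # step 3: restore a fresh keys folder
ambari-server start                                                   # step 6: new ca.crt/ca.key are generated

# on every agent host
cp -rp /var/lib/ambari-agent/keys /var/lib/ambari-agent/keys.bak      # step 4: back up agent certs
rm -f /var/lib/ambari-agent/keys/*                                    # step 5
ambari-agent restart                                                  # step 7: agent requests a new cert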
11-28-2016
05:48 PM
5 Kudos
Note: Ranger communicates with plug-ins only over 2-way SSL (1-way SSL is not allowed). [Updated] It appears one-way SSL is possible with the latest patch - https://issues.apache.org/jira/browse/RANGER-1094
First, get the server keystore (skeystore.jks) and server truststore (strustore.jks), and the client keystore (ckeystore.jks) and client truststore (ctruststore.jks). You can create these keystores/truststores once you get the signed certs back from the CA; a rough sketch is included at the end of this post. Here are the steps:
1. Log in to Ambari.
Go to Ranger > Configs > Ranger Settings, point External URL to a URL that uses SSL: https://<hostname of Ranger>:<https port, default is 6182>
and set
ranger.service.https.attrib.ssl.enabled to true
2. Go to HDFS > Configs > Advanced > ranger-hdfs-policymgr-ssl and set the following properties:
xasecure.policymgr.clientssl.keystore = /etc/hadoop/conf/ckeystore.jks
xasecure.policymgr.clientssl.keystore.password = bigdata
xasecure.policymgr.clientssl.truststore = strustore.jks
xasecure.policymgr.clientssl.truststore.password = bigdata
3. Go to HDFS > Configs > Advanced > Advanced ranger-hdfs-plugin-properties
common.name.for.certificate = specify the common name (or alias) that is specified in ckeystore.jks
4. Go to HDFS > Configs > Advanced > Advanced ranger-hdfs-plugin-properties and select the Enable Ranger for HDFS check box.
5. Go to Ranger > Configs > Ranger Settings > Advanced ranger-admin-site and set:
ranger.https.attrib.keystore.file=skeystore.jks
ranger.service.https.attrib.keystore.pass=bigdata
ranger.service.https.attrib.keystore.keyalias=specify alias name that is specified in skeystore.jks file
ranger.service.https.attrib.clientAuth=want
Add the following under Custom ranger-admin-site:
ranger.service.https.attrib.client.auth=want
ranger.service.https.attrib.keystore.file=skeystore.jks
6. Log into the Ranger Policy Manager UI as the admin user. Click the Edit button of your repository (in this case, hadoopdev), provide the CN name of the keystore as the value for Common Name For Certificate, and save your changes.
7. This step applies only to HDP 2.5 (there is a bug in 2.5, hence the need to modify the shell script).
Go to /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh
Edit JAVA_OPTS to add the truststore and truststore password:
JAVA_OPTS=" ${JAVA_OPTS} -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Djavax.net.ssl.trustStore=/tmp/rangercerts/ctruststore.jks -Djavax.net.ssl.trustStorePassword=bigdata"
8. Restart all the services; the HDFS plug-in should now be able to communicate with the Ranger service.
Note:
While creating the client certs, make sure you use the extension "usr_cert" for the client cert and "server_cert" for the server cert; otherwise 2-way SSL communication will fail.
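For reference, here is a rough sketch of how the four keystore/truststore files could be built once you have the CA-signed certs. The cert/key file names, the aliases, and the bigdata password are illustrative only:
# Ranger admin (server) side: bundle the key and signed cert, then convert to a JKS keystore
openssl pkcs12 -export -inkey server.key -in server.crt -out server.p12 -name rangeradmin -password pass:bigdata
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass bigdata -destkeystore skeystore.jks -deststorepass bigdata

# plug-in (client) side keystore, same pattern
openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12 -name rangerclient -password pass:bigdata
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 -srcstorepass bigdata -destkeystore ckeystore.jks -deststorepass bigdata

# truststores: the server trusts the client cert and the client trusts the server cert (2-way SSL)
keytool -import -alias rangerclient -file client.crt -keystore strustore.jks -storepass bigdata -noprompt
keytool -import -alias rangeradmin -file server.crt -keystore ctruststore.jks -storepass bigdata -noprompt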
11-17-2016
12:45 AM
Steps to configure 2-way SSL between ambari-server and ambari-agent using custom certs. Here I have used a CA-signed cert on the server side, while the agent certs are generated dynamically. If you are planning to use CA-signed certs on the agent side as well, then for every agent install you may have to copy the certs and do some manual work.
1. Make sure you have a fresh keys folder. (If you do not have one, you can copy the folder from a freshly installed machine or do the following:)
- Delete all the crt and csr files that starts with hostname at /var/lib/ambari-server/keys.
- Empty /var/lib/ambari-server/keys/db/index.txt file
- Delete any certs under /var/lib/ambari-server/keys/db/newcerts/
2. Copy your own signed certificate and key files to /var/lib/ambari-server/keys/
Ex: the certificate and key are named ca-cust.crt and ca-cust.key
3. Create a PKCS12 keystore file from your cert and key files.
Ex: openssl pkcs12 -export -inkey /var/lib/ambari-server/keys/ca-cust.key -in /var/lib/ambari-server/keys/ca-cust.crt -out /var/lib/ambari-server/keys/keystore-cust.p12 -password pass:bigdata -passin pass:bigdata
Note: replace the passwords with appropriate values.
4. Create pass-cust.txt containing the keystore password that was provided in step 3.
Ex: echo "bigdata" > pass-cust.txt
5. Configure your ambari.properties with the appropriate cert, key, and keystore file names:
security.server.cert_name=ca-cust.crt
security.server.key_name=ca-cust.key
security.server.keystore_name=keystore-cust.p12
security.server.truststore_name=keystore-cust.p12
security.server.crt_pass_file=pass-cust.txt
security.server.two_way_ssl=true
6. Remove any existing certs on all the agent hosts at /var/lib/ambari-agent/keys/
7. Start ambari-server and ambari-agent and watch the logs (a few sanity checks are sketched below).
Note 1: look out for SSL errors in the ambari-server logs during startup. This was tried on Ambari 2.4.x; I have also tried it with 2.6.x and it works fine.
Note 2: there is currently a bug in the product, https://issues.apache.org/jira/browse/AMBARI-23920 - please follow the workaround mentioned there.
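A few sanity checks after restarting (the log paths are defaults, and 8440 is assumed to be the standard Ambari agent registration port):
grep -i ssl /var/log/ambari-server/ambari-server.log | tail -20
grep -i -e heartbeat -e ssl /var/log/ambari-agent/ambari-agent.log | tail -20
# confirm the server is presenting the custom cert
openssl s_client -connect $(hostname -f):8440 -showcerts </dev/null | openssl x509 -noout -subject -dates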
09-08-2016
06:26 PM
@mkataria Is it a self-signed cert? Did you try adding the host to the trusted sites in the browser? You can cross-check the cert creation process with the below article:
https://community.hortonworks.com/articles/50405/how-to-enable-https-for-apache-ambari-using-jks.html
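If it helps, one way to see exactly which cert Ambari is serving (8443 is just an assumed HTTPS port; use whatever was configured during ambari-server setup-security):
openssl s_client -connect <ambari-host>:8443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates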
08-30-2016
10:01 PM
1 Kudo
@Michael Dennis "MD" Uanang @Guilherme Braccialli There is a bug in Ambari - it is trying to read the HBase JMX properties over HTTP rather than HTTPS. I don't see any issue on the HBase side; those errors show up when Ambari tries to connect using HTTP, and HBase complains that someone is connecting over HTTP. I am yet to raise a JIRA/bug for this in Ambari - I will create one today.
08-22-2016
11:22 PM
6 Kudos
1. As a first step, enable HTTPS for HDFS; you can follow the article https://community.hortonworks.com/articles/52875/enable-https-for-hdfs.html
2. Add/update the below configurations in "Custom mapred-site" (mapred-site.xml):
mapreduce.jobhistory.http.policy=HTTPS_ONLY
mapreduce.jobhistory.webapp.https.address=<JHS>:<JHS_HTTPS_PORT>
mapreduce.ssl.enabled=true
mapreduce.shuffle.ssl.enabled=true
Ex: mapreduce.jobhistory.webapp.https.address=apappu-hdp234-2.openstacklocal:19889
3. Add/update the below configurations under "Advanced yarn-site" (yarn-site.xml):
yarn.http.policy=HTTPS_ONLY
yarn.log.server.url=https://JHS:JHS_HTTPS_PORT/jobhistory/logs
yarn.resourcemanager.webapp.https.address=RM:RM_HTTPS_PORT
yarn.nodemanager.webapp.https.address=0.0.0.0:NM_HTTPS_PORT
Ex:
yarn.log.server.url=https://apappu-hdp234-2.ambari.org:19889/jobhistory/logs
yarn.resourcemanager.webapp.https.address=apappu-hdp234-2.ambari.org:8090
yarn.nodemanager.webapp.https.address=0.0.0.0:8042
4. Add/update the below property in hdfs-site (hdfs-site.xml) under the HDFS service:
dfs.https.enable=true
5. Restart the HDFS, YARN, and MAPREDUCE services.
6. You should be able to access the URLs now, e.g. the JobHistory Server UI at https://<JHS>:<JHS_HTTPS_PORT>/jobhistory and the ResourceManager UI at https://<RM>:<RM_HTTPS_PORT>/ (a couple of curl checks are shown below).
More articles:
* To enable HTTPS for HBASE - https://community.hortonworks.com/articles/51165/enable-httpsssl-for-hbase-master-ui.html
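A quick way to verify, assuming self-signed certs (-k skips certificate validation) and reusing the example hosts/ports above; /cluster, /jobhistory and /node are the standard Hadoop web UI paths:
curl -k -I https://apappu-hdp234-2.ambari.org:8090/cluster       # ResourceManager UI
curl -k -I https://apappu-hdp234-2.ambari.org:19889/jobhistory   # JobHistory Server UI
curl -k -I https://<NM_HOST>:8042/node                           # NodeManager UI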
08-22-2016
11:03 PM
6 Kudos
To enable HTTPS for web HDFS, do the following:
Step 1: Get the keystore to use in the HDFS configurations.
a) If the cert is being signed by a CA, do the following:
1. Generate a keystore for each host. Make sure the common name portion of the certificate matches the hostname where the certificate will be deployed.
keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048
2. Generate a CSR from the above keystore:
keytool -certreq -alias c6401 -keyalg RSA -file /tmp/c6401.csr -keystore /tmp/keystore.jks -storepass bigdata
3. Now get the signed cert from the CA - the file name is /tmp/c6401.crt
4. Import the root cert into the JKS first (skip this step if it is already present):
keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/keystore.jks
Note: here ca.crt is root cert
5. Repeat step 4 for the intermediate cert, if there is any.
6. Import signed cert into JKS
keytool -import -alias c6401 -file /tmp/c6401.crt -keystore /tmp/keystore.jks -storepass bigdata
7. Import the root cert into the truststore (this creates a new truststore.jks):
keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/truststore.jks -storepass bigdata
8. Import intermediate cert (if there is any) to trust store (similar to step 7)
OR
b) Do the following steps if you are planning to use a self-signed cert:
1. Generate a keystore for each host. Make sure the common name portion of the certificate matches the hostname where the certificate will be deployed.
# keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048
2. Generate the truststore.
Note: the truststore must contain the certificates of all servers. You can use the below commands to export the cert from the keystore and then import it into the truststore:
# keytool -export -file /tmp/c6401.crt -keystore /tmp/keystore.jks -storepass bigdata -alias c6401 -rfc
# keytool -import -alias c6401 -file /tmp/c6401.crt -keystore /tmp/truststore.jks -storepass bigdata
Step 2: Import the truststore certificates into the Java truststore (cacerts or jssecacerts):
keytool -importkeystore \
-srckeystore /tmp/truststore.jks \
-destkeystore /usr/java/default/jre/lib/security/cacerts \
-deststorepass changeit \
-srcstorepass bigdata
Step 3: Log in to Ambari and configure/add the following properties in core-site.xml:
hadoop.ssl.require.client.cert=false
hadoop.ssl.hostname.verifier=DEFAULT
hadoop.ssl.keystores.factory.class=org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.server.conf=ssl-server.xml
hadoop.ssl.client.conf=ssl-client.xml
Step 4: Add/modify the following properties in hdfs-site.xml.
For a non-HA cluster:
dfs.http.policy=HTTPS_ONLY
dfs.client.https.need-auth=false
dfs.datanode.https.address=0.0.0.0:50475
dfs.namenode.https-address=NN:50470
dfs.namenode.secondary.https-address=c6401-node3.coelab.cloudera.com:50091
Note: you can also set dfs.http.policy=HTTP_AND_HTTPS
For HA-enabled clusters:
dfs.http.policy=HTTPS_ONLY
dfs.client.https.need-auth=false
dfs.datanode.https.address=0.0.0.0:50475
dfs.namenode.https-address.<nameservice>.nn1=c6401-node2.coelab.cloudera.com:50470
dfs.namenode.https-address.<nameservice>.nn2=c6401-node3.coelab.cloudera.com:50470
dfs.journalnode.https-address=0.0.0.0:8481
Step 5: Update the following configurations under Advanced ssl-server (ssl-server.xml):
ssl.server.truststore.location=/tmp/truststore.jks
ssl.server.truststore.password=bigdata
ssl.server.truststore.type=jks
ssl.server.keystore.location=/tmp/keystore.jks
ssl.server.keystore.password=bigdata
ssl.server.keystore.keypassword=bigdata
ssl.server.keystore.type=jks
Step 6: Update the following configurations under Advanced ssl-client (ssl-client.xml):
ssl.client.truststore.location=/tmp/truststore.jks
ssl.client.truststore.password=bigdata
ssl.client.truststore.type=jks
ssl.client.keystore.location=/tmp/keystore.jks
ssl.client.keystore.password=bigdata
ssl.client.keystore.keypassword=bigdata
ssl.client.keystore.type=jks
Step 7: Restart the HDFS service.
Step 8: Import the CA root (and intermediate, if any) certs into the ambari-server truststore by running: ambari-server setup-security
For self-signed certs, make sure you import the namenode(s) certificates into the ambari-server truststore. Refer to Steps to set up Truststore for Ambari Server for more details.
Step 9: Open the namenode web UI in HTTPS mode on port 50470 (a couple of verification commands are sketched below).
Tips:
When you enable HTTPS for HDFS, the journal nodes and namenode start in HTTPS mode; check the journal node and namenode logs for any errors.
You can skip the step to create the truststore.jks file and make use of the Java truststore instead. However, ensure you import all required certs into the Java truststore.
More articles:
Enable HTTPS for MapReduce2 and YARN
Enable HTTPS for HBase
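A couple of optional sanity checks, reusing the example host names and /tmp keystore paths from above (add a user.name query parameter to the WebHDFS call if your cluster requires it):
keytool -list -keystore /tmp/keystore.jks -storepass bigdata     # confirm the signed cert and CA chain are present
curl -k -I https://c6401-node2.coelab.cloudera.com:50470/        # NameNode UI over HTTPS
curl -k "https://c6401-node2.coelab.cloudera.com:50470/webhdfs/v1/?op=LISTSTATUS"   # WebHDFS over HTTPS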
08-12-2016
01:09 AM
4 Kudos
Here are the steps to enable HTTPS for the HBASE Master UI.
1. Add the following properties in "Custom hbase-site":
hbase.ssl.enabled=true
hbase.http.policy=HTTPS_ONLY
hadoop.ssl.enabled=true
2. In HBASE there is no direct option to add/configure the keystore (JKS) files; it uses the HADOOP configuration files. If HTTPS is already enabled for the HDFS Web UI, then your host is already configured with JKS files in ssl-server.xml and ssl-client.xml, and HBASE uses the same ssl-server.xml. You can find more details on configuring HTTPS for HDFS at https://community.hortonworks.com/articles/52875/enable-https-for-hdfs.html
3. If you are planning to have different JKS files for HDFS and HBASE, you can copy ssl-server.xml to the /etc/hbase/conf path and configure the JKS file there.
Restart the HBASE master and the server should come up fine and be accessible at https://HOSTNAME:16010/ (a quick check is shown below).
Note: Please note that there is a bug in AMBARI because of which the HBASE quick link always tries to open with HTTP; you may have to change the protocol manually and then access the UI.
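A quick check once HBASE is back up (16010 is the default master info port; -k skips validation for self-signed certs):
curl -k -I https://HOSTNAME:16010/master-status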