01-24-2017
08:33 PM
2 Kudos
When you build a cluster and enable Kerberos, Ambari creates and uses default principal names. Here are the steps to enable Kerberos using the Blueprint/REST APIs.
1. Create a blueprint.
API- POST - http://apappu3.hdp.com:8080/api/v1/blueprints/cbp
Payload -
{
  "configurations" : [
    {
      "zoo.cfg" : {
        "properties" : {
          "dataDir" : "/data/hadoop/zookeeper",
          "initLimit" : "5",
          "syncLimit" : "2",
          "maxClientCnxns" : "0"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "components" : [
        { "name" : "HIVE_SERVER" },
        { "name" : "HISTORYSERVER" },
        { "name" : "HIVE_METASTORE" },
        { "name" : "ZOOKEEPER_CLIENT" },
        { "name" : "WEBHCAT_SERVER" },
        { "name" : "ZOOKEEPER_SERVER" },
        { "name" : "APP_TIMELINE_SERVER" },
        { "name" : "RESOURCEMANAGER" },
        { "name" : "SLIDER" },
        { "name" : "ATLAS_CLIENT" },
        { "name" : "ATLAS_SERVER" },
        { "name" : "NAMENODE" },
        { "name" : "SECONDARY_NAMENODE" },
        { "name" : "DATANODE" },
        { "name" : "NODEMANAGER" },
        { "name" : "KAFKA_BROKER" },
        { "name" : "SPARK_CLIENT" },
        { "name" : "SPARK_JOBHISTORYSERVER" },
        { "name" : "SPARK_THRIFTSERVER" },
        { "name" : "KERBEROS_CLIENT" }
      ],
      "name" : "host-group-2",
      "cardinality" : "1"
    }
  ],
  "Blueprints" : {
    "stack_name" : "HDP",
    "stack_version" : "2.5"
  }
}
2. Create the cluster with API:
API- POST - http://apappu3.hdp.com:8080/api/v1/clusters/test
Payload
{
  "blueprint" : "cbp",
  "config_recommendation_strategy" : "ALWAYS_APPLY",
  "host_groups" : [
    {
      "name" : "host-group-2",
      "configurations" : [
        {
          "hive-site" : {
            "javax.jdo.option.ConnectionPassword" : "admin"
          }
        }
      ],
      "hosts" : [
        { "fqdn" : "apappu3.hdp.com" }
      ]
    }
  ]
}
3. Now update the Kerberos descriptor to use your own principals.
Ex: here we update the Spark principal name.
API - POST - http://apappu3.hdp.com:8080/api/v1/clusters/test/artifacts/kerberos_descriptor
Payload-
{
"artifact_data" : {
"services" : [ {
"name" : "SPARK",
"identities" : [
{
"principal" : {
"configuration" : "spark-defaults/spark.history.kerberos.principal",
"type" : "user",
"local_username" : "spark",
"value" : "spark@HDSND026.xxxx.xxx.COM"
},
"name" : "sparkuser"
}
]
}]
}
}
4. Update the default Kerberos configurations:
API - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
[
{
"Clusters": {
"desired_config": {
"type": "krb5-conf",
"tag": "version1",
"properties": {
"manage_krb5_conf" : "true",
"conf_dir" : "/etc",
"content" : "\n[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm}}\n{% endfor %}\n{% endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = {{admin_server_host|default(kdc_host, True)}}\n }\n\n{# Append additional realm declarations below #}",
"domains" : ".apache.com,apache.com"
}
}
}
},
{
"Clusters": {
"desired_config": {
"type": "kerberos-env",
"tag": "version1",
"properties": {
"realm" : "AMBARI.APACHE.ORG",
"kdc_type" : "mit-kdc",
"kdc_host" : "apappu3.hdp.com",
"admin_server_host" : "apappu3.hdp.com",
"encryption_types" : "aes des3-cbc-sha1 rc4 des-cbc-md5",
"ldap_url" : "",
"container_dn" : ""
}
}
}
}
]
5. Kerberize the cluster:
PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
{
"session_attributes" : {
"kerberos_admin" : {
"principal" : "admin/admin",
"password" : "admin"
}
},
"Clusters": {
"security_type" : "KERBEROS"
}
}
6. Restart all services once step 5 is completed.
APIs to stop/start services:
STOP - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services
Payload: {"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
START - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services
Payload: {"ServiceInfo":{"state":"STARTED"}}
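Steps 5 and 6 can be scripted with curl. A minimal sketch, using the example host, cluster name, and admin credentials from this article (substitute your own); the curl commands are printed as a dry run so you can review them before sending:

```shell
#!/bin/sh
# Example values from this article -- replace with your own.
AMBARI="http://apappu3.hdp.com:8080"
CLUSTER="test"

# Step 5 payload: switch the cluster to Kerberos, passing the KDC admin
# credentials as session attributes.
KERB_PAYLOAD='{"session_attributes":{"kerberos_admin":{"principal":"admin/admin","password":"admin"}},"Clusters":{"security_type":"KERBEROS"}}'

# Step 6 payloads: stop all services, then start them again.
STOP_PAYLOAD='{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}'
START_PAYLOAD='{"ServiceInfo":{"state":"STARTED"}}'

# Ambari write calls require the X-Requested-By header. Printed as a dry
# run; drop the leading 'echo' on each line to actually send the requests.
echo curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d "$KERB_PAYLOAD" "$AMBARI/api/v1/clusters/$CLUSTER"
echo curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d "$STOP_PAYLOAD" "$AMBARI/api/v1/clusters/$CLUSTER/services"
echo curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d "$START_PAYLOAD" "$AMBARI/api/v1/clusters/$CLUSTER/services"
```

Wait for the stop request to finish (Ambari returns a request id you can poll) before issuing the start.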
Note: I have tried and tested this in Ambari 2.4.x.
01-24-2017
05:48 PM
7 Kudos
Usually a service can be removed using API calls, but if the service is in an inconsistent state the APIs do not work, so the only way to delete it is by running SQL queries against the Ambari database. Here is the list of steps to delete the KNOX service:
1. delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name like '%KNOX%')
2. delete from confgroupclusterconfigmapping where config_type like '%knox%'
3. delete from clusterconfig where type_name like '%knox%'
4. delete from clusterconfigmapping where type_name like '%knox%'
5. delete from serviceconfig where service_name = 'KNOX'
6. delete from servicedesiredstate where service_name = 'KNOX'
7. delete from hostcomponentdesiredstate where service_name = 'KNOX'
8. delete from hostcomponentstate where service_name = 'KNOX'
9. delete from servicecomponentdesiredstate where service_name = 'KNOX'
10. delete from clusterservices where service_name = 'KNOX'
11. delete from alert_notice where history_id in (select alert_id from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX'))
12. delete from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX')
13. delete from alert_definition where service_name like '%KNOX%'
(alert_notice must be deleted before alert_history, since its rows reference alert_history.)
Note1: I have tried and tested this in Ambari 2.4.x.
Note2: The above queries are case sensitive, so use the correct upper/lower case for the service name.
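Since a mistake here is hard to undo, the deletes can be wrapped in a single transaction so that any failure rolls the whole cleanup back. A sketch, assuming a Postgres-backed Ambari database named `ambari` (adjust the psql flags to your setup); it prints the SQL as a dry run:

```shell
#!/bin/sh
# Build the cleanup script for one service; only a few representative
# statements are shown -- fill in the remaining deletes from the numbered
# list above, in the same order.
SERVICE=KNOX
SQL=$(cat <<EOF
BEGIN;
DELETE FROM serviceconfigmapping WHERE service_config_id IN
  (SELECT service_config_id FROM serviceconfig WHERE service_name LIKE '%$SERVICE%');
DELETE FROM serviceconfig WHERE service_name = '$SERVICE';
-- ...remaining deletes from the list above go here...
DELETE FROM clusterservices WHERE service_name = '$SERVICE';
COMMIT;
EOF
)

# Dry run: print the statements. To execute, pipe them to psql, e.g.:
#   echo "$SQL" | psql -U ambari ambari
echo "$SQL"
```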
01-17-2017
11:23 PM
6 Kudos
Here are the steps to access the Ambari UI through Knox. This was tried and tested with Ambari 2.4 and HDP 2.5.
1. Make sure Knox is configured properly and works fine.
2. ssh to the Knox gateway host and go to /var/lib/knox/data-2.5.3.0-37/services
3. Download the configurations from https://github.com/apache/knox/tree/v0.11.0/gateway-service-definitions/src/main/resources/services/ambariui/2.2.0/
4. Make sure your folder structure looks like /var/lib/knox/data-2.5.3.0-37/services/ambariui/2.2.0 and contains the rewrite.xml and service.xml files.
5. Change the owner/group to knox for /var/lib/knox/data-2.5.3.0-37/services/ambariui/ and its subdirectories.
6. Go to the Knox configurations and modify "Advanced topology" with the below service tag:
<service>
    <role>AMBARIUI</role>
    <url>http://AMBARIHOST:8080</url>
</service>
7. Restart the Knox service.
8. You should be able to access the Ambari server UI at the below URL:
https://KNOX-HOST:8443/gateway/default/ambari/
Note: replace 'default' in the URL with your topology name if you are not using the default topology.
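Steps 2-5 boil down to creating the service-definition directory and handing it to the knox user. A sketch; KNOX_DATA points at a scratch directory here for illustration (on a real gateway it is /var/lib/knox/data-2.5.3.0-37, and the commands run as root):

```shell
#!/bin/sh
# Scratch location for illustration; on the gateway host set
#   KNOX_DATA=/var/lib/knox/data-2.5.3.0-37
KNOX_DATA="${KNOX_DATA:-/tmp/knox-data-demo}"
SVC_DIR="$KNOX_DATA/services/ambariui/2.2.0"

# Step 4: the expected folder structure.
mkdir -p "$SVC_DIR"

# rewrite.xml and service.xml come from the Apache Knox repo linked above;
# empty placeholders stand in for them here.
touch "$SVC_DIR/rewrite.xml" "$SVC_DIR/service.xml"

# Step 5 (on the real host): hand the tree to the knox user:
#   chown -R knox:knox "$KNOX_DATA/services/ambariui"
ls "$SVC_DIR"
```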
12-14-2016
03:54 PM
Ambari REST APIs to add a component in a Kerberized cluster. I tried and tested these with Ambari 2.4.1. Here is an example that installs SECONDARY_NAMENODE.
1. API to add the component:
curl -s -H "X-Requested-By:ambari" --user admin:admin -i -X POST -d '{"host_components":[{"HostRoles":{"component_name":"SECONDARY_NAMENODE"}}]}' "http://AMBARIHOST:8080/api/v1/clusters/CLUSTERNAME/hosts?Hosts/host_name=COMPONENTHOST"
2. API to install the component added above:
curl -s -H "X-Requested-By:ambari" --user admin:admin -i -X PUT -d '{"RequestInfo":{"context":"Install SECONDARY_NAMENODE","operation_level":{"level":"HOST_COMPONENT","cluster_name":"test","host_name":"COMPONENTHOST","service_name":"HDFS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' "http://AMBARIHOST:8080/api/v1/clusters/CLUSTERNAME/hosts/COMPONENTHOST/host_components/SECONDARY_NAMENODE"
3. API to provide the KDC admin credentials:
curl -s -H "X-Requested-By:ambari" --user admin:admin -i -X POST -d '{ "Credential" : { "principal" : "admin/admin", "key" : "admin", "type" : "temporary" } }' "http://AMBARIHOST:8080/api/v1/clusters/CLUSTERNAME/credentials/kdc.admin.credential"
Note: Replace CLUSTERNAME, AMBARIHOST and COMPONENTHOST with appropriate values. (The URLs are quoted so the shell does not interpret the '?' in step 1.)
12-11-2016
06:05 AM
4 Kudos
Steps to configure ambari-server to archive (rotate and compress) its log files.
1. Open the /etc/ambari-server/conf/log4j.properties file and change
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.MaxFileSize=80MB
log4j.appender.file.MaxBackupIndex=60
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
to
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.triggeringPolicy.MaxFileSize=10485760
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
log4j.appender.file.rollingPolicy.FileNamePattern=${ambari.log.dir}/${ambari.log.file}.%i.log.gz
Note: change the configuration values as per your needs.
2. Download apache-log4j-extras.jar from https://logging.apache.org/log4j/extras/download.html
3. Copy the downloaded jar to /usr/lib/ambari-server/
4. Restart ambari-server and check that the log files are getting archived; look out for warnings in ambari-server.out.
I have used https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html as a reference.
11-28-2016
10:12 PM
@Robert Levas Yes, that is correct. I can see that the server cert expiry is also set to 365 days during creation, so most likely the server cert will expire as well:
openssl ca -create_serial -out /var/lib/ambari-server/keys/ca.crt -days 365 -keyfile /var/lib/ambari-server/keys/ca.key -key **** -selfsign -extensions jdk7_ca -config /var/lib/ambari-server/keys/ca.config -batch -infiles /var/lib/ambari-server/keys/ca.csr
11-28-2016
09:50 PM
6 Kudos
The Ambari server generates certs with 1-year validity, so after a year the agent and server certs expire and all agents fail to communicate with the Ambari server. The steps below can be followed to replace the expired certs.
1. Stop ambari-server.
2. Take a backup of the existing /var/lib/ambari-server/keys folder and empty it.
3. Download the attached keys.zip file and copy it to /var/lib/ambari-server/. Your new folder structure should be like /var/lib/ambari-server/keys/ca.config, /var/lib/ambari-server/keys/db/, ... - basically a fresh keys folder (this is what you get when you install ambari-server).
4. Take a backup of all the agent certs located at /var/lib/ambari-agent/keys/ on all the hosts.
5. Delete all the files under the /var/lib/ambari-agent/keys/ folder.
6. Restart ambari-server.
Note: ambari-server should create new certs such as /var/lib/ambari-server/keys/ca.crt and /var/lib/ambari-server/keys/ca.key.
7. Restart ambari-agent.
Note: ambari-agent should create new certs under the /var/lib/ambari-agent/keys/ folder.
Now you should see a successful heartbeat from all the agents.
Note: If encryption is enabled on Ambari, copy credentials.jceks and the master files from the backed-up keys into the newly created keys folder.
Note: If SSL is enabled for the Ambari UI, you have to run the SSL-enable step again, as some of those certs were not part of the keys folder; alternatively those files can be copied into the new keys folder.
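Before replacing anything, it is worth confirming the cert has actually expired; openssl can print and exit-code-check the notAfter date. A sketch - it generates a throwaway self-signed cert to run against, so on a real server point CERT at /var/lib/ambari-server/keys/ca.crt instead:

```shell
#!/bin/sh
# Throwaway cert for illustration; on the Ambari host set
#   CERT=/var/lib/ambari-server/keys/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
CERT="${CERT:-/tmp/demo.crt}"

# Print the expiry date...
openssl x509 -noout -enddate -in "$CERT"

# ...and check it: -checkend 0 exits non-zero once the cert is past notAfter.
if openssl x509 -checkend 0 -noout -in "$CERT" >/dev/null; then
    echo "cert is still valid"
else
    echo "cert has EXPIRED"
fi
```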
11-28-2016
05:48 PM
5 Kudos
Note: Ranger communicates with plug-ins only over 2-way SSL (1-way SSL is not allowed). [Updated] It appears one-way SSL is possible with the latest patch: https://issues.apache.org/jira/browse/RANGER-1094
First, get the server keystore skeystore.jks with truststore strustore.jks, and the client keystore ckeystore.jks with truststore ctruststore.jks (you can create these keystores/truststores once you get the signed certs back from CA signing). Here are the steps:
1. Log in to Ambari.
Go to Ranger > Configs > Ranger Settings and point External URL at a URL that uses SSL: https://<hostname of Ranger>:<https port, default is 6182>
and set
ranger.service.https.attrib.ssl.enabled to true
2. Go to HDFS > Configs > Advanced > ranger-hdfs-policymgr-ssl and set the following properties:
xasecure.policymgr.clientssl.keystore = /etc/hadoop/conf/ckeystore.jks
xasecure.policymgr.clientssl.keystore.password = bigdata
xasecure.policymgr.clientssl.truststore = strustore.jks
xasecure.policymgr.clientssl.truststore.password = bigdata
3. Go to HDFS > Configs > Advanced > Advanced ranger-hdfs-plugin-properties
common.name.for.certificate = specify the common name (or alias) that is specified in ckeystore.jks
4. Go to HDFS > Configs > Advanced > Advanced ranger-hdfs-plugin-properties and select the Enable Ranger for HDFS check box.
5.Go to Ranger > Configs > Ranger Settings > Advanced ranger-admin-site
ranger.https.attrib.keystore.file=skeystore.jks
ranger.service.https.attrib.keystore.pass=bigdata
ranger.service.https.attrib.keystore.keyalias=specify alias name that is specified in skeystore.jks file
ranger.service.https.attrib.clientAuth=want
Add below under custom Ranger-admin-site
ranger.service.https.attrib.client.auth=want
ranger.service.https.attrib.keystore.file=skeystore.jks
6.Log into the Ranger Policy Manager UI as the admin user. Click the Edit button of your repository (in this case, hadoopdev) and provide the CN name of the keystore as the value for Common Name For Certificate, then save your changes.
7. This is applicable only to HDP 2.5 (there is a bug in 2.5, hence we modify the sh script).
Go to /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh
and edit JAVA_OPTS to add the truststore and truststore password:
JAVA_OPTS=" ${JAVA_OPTS} -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Djavax.net.ssl.trustStore=/tmp/rangercerts/ctruststore.jks -Djavax.net.ssl.trustStorePassword=bigdata"
8. Restart all the services; your HDFS plug-in should now be able to communicate with the Ranger service.
Note: while creating the certs, make sure you provide the extension "usr_cert" for the client certs and "server_cert" for the server cert; otherwise the 2-way SSL communication will fail.
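That last note matters in practice: a client cert signed without the clientAuth usage is rejected during the 2-way handshake. A sketch of signing a client cert with a usr_cert extension section - the demo CA and the names are illustrative (with a real CA you would submit the CSR instead):

```shell
#!/bin/sh
# Extension sections equivalent to openssl.cnf's usr_cert / server_cert.
cat > /tmp/ranger-ext.cnf <<'EOF'
[ usr_cert ]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth

[ server_cert ]
basicConstraints = CA:FALSE
extendedKeyUsage = serverAuth
EOF

# Throwaway CA standing in for your real CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo-ca" \
    -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# Client (plug-in) key + CSR, then sign it with the usr_cert extensions.
openssl req -newkey rsa:2048 -nodes -subj "/CN=ranger-plugin" \
    -keyout /tmp/client.key -out /tmp/client.csr 2>/dev/null
openssl x509 -req -in /tmp/client.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
    -CAcreateserial -days 365 -extfile /tmp/ranger-ext.cnf -extensions usr_cert \
    -out /tmp/client.crt 2>/dev/null

# Verify the extension made it in: should show "TLS Web Client Authentication".
openssl x509 -in /tmp/client.crt -noout -text | grep -A1 "Extended Key Usage"
```

A server cert would be signed the same way with `-extensions server_cert`.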
11-17-2016
12:45 AM
Steps to configure 2-way SSL between ambari-server and ambari-agent using custom certs. Here I have used a CA-signed cert on the server side, while the agent certs are generated dynamically. If you plan to use CA-signed certs on the agent side as well, then for every agent install you may have to copy the certs over and do some manual work.
1. Make sure you have a fresh keys folder (if you do not have one, you can copy the folder from a freshly installed machine or do the following):
- Delete all the crt and csr files that starts with hostname at /var/lib/ambari-server/keys.
- Empty /var/lib/ambari-server/keys/db/index.txt file
- Delete any certs under /var/lib/ambari-server/keys/db/newcerts/
2. Copy your own signed certificate and key files to /var/lib/ambari-server/keys/
Ex: certificate - ca-cust.crt, key - ca-cust.key
3. Create a PKCS12 keystore file from your cert and key files.
Ex: openssl pkcs12 -export -inkey /tmp/keys/ca-cust.key -in /tmp/keys/ca-cust.crt -out /tmp/keys/keystore-cust.p12 -password pass:bigdata -passin pass:bigdata
Note: replace the passwords appropriately.
4. Create pass-cust.txt containing the keystore password provided in step 3.
Ex: echo "bigdata" > pass-cust.txt
5. Configure your ambari.properties with appropriate cert, keys, keystore file names.
security.server.cert_name=ca-cust.crt
security.server.key_name=ca-cust.key
security.server.keystore_name=keystore-cust.p12
security.server.truststore_name=keystore-cust.p12
security.server.crt_pass_file=pass-cust.txt
security.server.two_way_ssl=true
6. Remove any existing certs in all the agent hosts at /var/lib/ambari-agent/keys/
7. Start ambari-server and ambari-agent and watch the logs.
Note1: Look out for SSL errors in the ambari-server logs during startup. This was tried on Ambari 2.4.x; I have also tried 2.6.x and it works fine there too.
Note2: There is currently a bug in the product, https://issues.apache.org/jira/browse/AMBARI-23920 - please follow the workaround mentioned there.
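Steps 2-4 end to end, with a throwaway self-signed pair standing in for the CA-signed ca-cust.crt/ca-cust.key (the /tmp/keys paths and the 'bigdata' password follow the example above - substitute your own):

```shell
#!/bin/sh
KEYS=/tmp/keys
mkdir -p "$KEYS"

# Stand-in for the CA-signed cert + key from step 2.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=ambari-demo" \
    -keyout "$KEYS/ca-cust.key" -out "$KEYS/ca-cust.crt" 2>/dev/null

# Step 3: bundle them into a PKCS12 keystore.
openssl pkcs12 -export -inkey "$KEYS/ca-cust.key" -in "$KEYS/ca-cust.crt" \
    -out "$KEYS/keystore-cust.p12" -password pass:bigdata

# Step 4: the password file referenced by security.server.crt_pass_file.
echo "bigdata" > "$KEYS/pass-cust.txt"

# Sanity check: the keystore opens with the password from pass-cust.txt.
openssl pkcs12 -info -noout -in "$KEYS/keystore-cust.p12" \
    -password pass:bigdata && echo "keystore OK"
```

On the real server the files would live under /var/lib/ambari-server/keys/ to match the ambari.properties entries in step 5.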
08-30-2016
10:01 PM
1 Kudo
@Michael Dennis "MD" Uanang @Guilherme Braccialli There is a bug in Ambari: it tries to read the HBase JMX properties over HTTP rather than HTTPS. I don't see any issue on the HBase side; those errors appear when Ambari connects over HTTP and HBase complains that someone is connecting using HTTP. I have yet to raise a JIRA/bug for this in Ambari - I will create one today.