Created on 01-24-2017 08:33 PM - edited on 02-18-2020 11:57 PM by VidyaSargur
When you build a cluster and enable Kerberos, Ambari by default creates and uses default principal names. Here are the steps to enable Kerberos with custom principal names using Blueprints/APIs.
1. Create and register a blueprint:
API- POST - http://apappu3.hdp.com:8080/api/v1/blueprints/cbp
Payload -
{ "configurations" : [ { "zoo.cfg" : { "properties" : { "dataDir" : "/data/hadoop/zookeeper", "initLimit": "5", "syncLimit": "2", "maxClientCnxns": "0" } } } ], "host_groups" : [ { "components" : [ { "name" : "HIVE_SERVER" }, { "name" : "HISTORYSERVER" }, { "name" : "HIVE_METASTORE" }, { "name" : "ZOOKEEPER_CLIENT" }, { "name" : "WEBHCAT_SERVER" }, { "name" : "ZOOKEEPER_SERVER" }, { "name" : "APP_TIMELINE_SERVER" }, { "name" : "RESOURCEMANAGER" }, { "name" : "SLIDER" }, { "name" : "ATLAS_CLIENT" }, { "name" : "ATLAS_SERVER" }, { "name" : "NAMENODE" }, { "name" : "SECONDARY_NAMENODE" } , { "name" : "DATANODE" }, { "name" : "NODEMANAGER" } , { "name" : "HISTORYSERVER" }, { "name" : "KAFKA_BROKER" } , { "name" : "SPARK_CLIENT" } , { "name" : "SPARK_JOBHISTORYSERVER" } , { "name" : "SPARK_THRIFTSERVER" }, { "name" : "KERBEROS_CLIENT" } ], "name" : "host-group-2", "cardinality" : "1" } ], "Blueprints" : { "stack_name" : "HDP", "stack_version" : "2.5" } }
2. Create the cluster with the API:
API- POST - http://apappu3.hdp.com:8080/api/v1/clusters/test
Payload -
{
  "blueprint" : "cbp",
  "config_recommendation_strategy" : "ALWAYS_APPLY",
  "host_groups" : [
    {
      "name" : "host-group-2",
      "configurations" : [
        { "hive-site" : { "javax.jdo.option.ConnectionPassword" : "admin" } }
      ],
      "hosts" : [ { "fqdn" : "apappu3.hdp.com" } ]
    }
  ]
}
3. Now update the Kerberos descriptor to use your own principal names:
Example: here we update the Spark principal name.
API - POST - http://apappu3.hdp.com:8080/api/v1/clusters/test/artifacts/kerberos_descriptor
Payload-
{ "artifact_data" : { "services" : [ { "name" : "SPARK", "identities" : [ { "principal" : { "configuration" : "spark-defaults/spark.history.kerberos.principal", "type" : "user", "local_username" : "spark", "value" : "spark@HDSND026.xxxx.xxx.COM" }, "name" : "sparkuser" } ] }] } }
4. Update the default Kerberos configurations:
API - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
Payload -
[ { "Clusters": { "desired_config": { "type": "krb5-conf", "tag": "version1", "properties": { "manage_krb5_conf" : "true", "conf_dir" : "/etc", "content" : "\n[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm}}\n{% endfor %}\n{% endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = {{admin_server_host|default(kdc_host, True)}}\n }\n\n{# Append additional realm declarations below #}", "domains" : ".apache.com,apache.com" } } } }, { "Clusters": { "desired_config": { "type": "kerberos-env", "tag": "version1", "properties": { "realm" : "AMBARI.APACHE.ORG", "kdc_type" : "mit-kdc", "kdc_host" : "apappu3.hdp.com", "admin_server_host" : "apappu3.hdp.com", "encryption_types" : "aes des3-cbc-sha1 rc4 des-cbc-md5", "ldap_url" : "", "container_dn" : "" } } } } ]
5. Enable Kerberos on the cluster:
PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
{ "session_attributes" : { "kerberos_admin" : { "principal" : "admin/admin", "password" : "admin" } }, "Clusters": { "security_type" : "KERBEROS" } }
6. Restart all services once step 5 has completed.
APIs to stop and start all services:
STOP - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services
Payload - {"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
START - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services
Payload - {"ServiceInfo": {"state" : "STARTED"}}
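A curl sketch of this stop/start cycle (payloads verbatim from above; wait for the stop request to complete before issuing the start):

# Stop all services (target state INSTALLED)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
     -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
     http://apappu3.hdp.com:8080/api/v1/clusters/test/services

# Start all services
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
     -d '{"ServiceInfo":{"state":"STARTED"}}' \
     http://apappu3.hdp.com:8080/api/v1/clusters/test/services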
Note: I have tried and tested this with Ambari 2.4.x.
Created on 01-24-2017 08:48 PM
This is one way to do it. However, you can create a cluster with Kerberos enabled from the start using Blueprints, as described in Automate HDP installation using Ambari Blueprints – Part 5. The method in that article eliminates the need to perform the Kerberos-specific tasks manually. If you wanted to use custom principal names, you would set the Kerberos descriptor (or just the changes to it) in the Cluster Creation Template under the "security" object, with the name "kerberos_descriptor".
For example:
{ "blueprint": "my_blueprint", "default_password": "hadoop", "host_groups": [ -- Host Group Details --- ], "credentials": [ { "alias": "kdc.admin.credential", "principal": "admin/admin", "key": "hadoop", "type": "TEMPORARY" } ], "security": { "type": "KERBEROS", "kerberos_descriptor" : { -- Kerberos Descriptor Changes --- } }, "Clusters": { "cluster_name": "my_cluster" } }
Created on 02-01-2017 11:04 PM
@Robert Levas Thanks for the pointer; however, in this case the customer would like to enable Kerberos at a later time with custom principal names.