Member since: 06-20-2016
Posts: 308
Kudos Received: 103
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1956 | 09-19-2018 06:31 PM
 | 1442 | 09-13-2018 09:33 PM
 | 1412 | 09-04-2018 05:29 PM
 | 4415 | 08-27-2018 04:33 PM
 | 3484 | 08-22-2018 07:46 PM
01-24-2017 11:03 PM
@Dezka Dex It looks like you are trying to use port 443. Ports below 1024 are privileged and require root, so please use a port number above 1024 instead. Please try port 8443.
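If you want to confirm the new port is free before switching, a quick sketch (8443 is just the value suggested above):
# No output means nothing is listening on 8443 yet
ss -ltn | grep ':8443'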
01-24-2017 10:57 PM
@Dezka Dex This does not look like an SSL error. Without SSL (with http), are you able to start the server successfully?
01-24-2017 08:33 PM
2 Kudos
When you build a cluster and enable Kerberos, Ambari by default creates and uses default principal names. Here are the steps to enable Kerberos using Blueprints/APIs, including how to override those defaults.
1. Blueprint: Create a blueprint.
API - POST - http://apappu3.hdp.com:8080/api/v1/blueprints/cbp
Payload -
{
"configurations" : [
{
"zoo.cfg" : {
"properties" : {
"dataDir" : "/data/hadoop/zookeeper",
"initLimit": "5",
"syncLimit": "2",
"maxClientCnxns": "0"
}
}
}
],
"host_groups" : [
{
"components" : [
{
"name" : "HIVE_SERVER"
},
{
"name" : "HISTORYSERVER"
},
{
"name" : "HIVE_METASTORE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "WEBHCAT_SERVER"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "APP_TIMELINE_SERVER"
},
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "SLIDER"
},
{
"name" : "ATLAS_CLIENT"
},
{
"name" : "ATLAS_SERVER"
},
{
"name" : "NAMENODE"
},
{
"name" : "SECONDARY_NAMENODE"
},
{
"name" : "DATANODE"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "KAFKA_BROKER"
},
{
"name" : "SPARK_CLIENT"
},
{
"name" : "SPARK_JOBHISTORYSERVER"
},
{
"name" : "SPARK_THRIFTSERVER"
},
{
"name" : "KERBEROS_CLIENT"
}
],
"name" : "host-group-2",
"cardinality" : "1"
}
],
"Blueprints" : {
"stack_name" : "HDP",
"stack_version" : "2.5"
}
}
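For reference, here is a minimal sketch of submitting this blueprint with curl, assuming admin:admin credentials and the payload saved as cbp.json (Ambari requires the X-Requested-By header on modifying calls):
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d @cbp.json http://apappu3.hdp.com:8080/api/v1/blueprints/cbp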
2. Create the cluster with the API:
API - POST - http://apappu3.hdp.com:8080/api/v1/clusters/test
Payload -
{
"blueprint" : "cbp",
"config_recommendation_strategy" : "ALWAYS_APPLY",
"host_groups" :[
{
"name" : "host-group-2",
"configurations" : [
{
"hive-site" : {
"javax.jdo.option.ConnectionPassword" : "admin"
}
}
],
"hosts" : [
{
"fqdn" : "apappu3.hdp.com"
}
]
}
]
}
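Again as a sketch under the same assumptions (payload saved as cluster.json), the create call returns a request resource whose progress you can poll; the request id 1 below is illustrative:
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d @cluster.json http://apappu3.hdp.com:8080/api/v1/clusters/test
# Poll until progress_percent reaches 100
curl -u admin:admin http://apappu3.hdp.com:8080/api/v1/clusters/test/requests/1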
3. Now update the Kerberos descriptor to use your own principals.
For example, here we are updating the Spark principal name.
API - POST - http://apappu3.hdp.com:8080/api/v1/clusters/test/artifacts/kerberos_descriptor
Payload-
{
"artifact_data" : {
"services" : [ {
"name" : "SPARK",
"identities" : [
{
"principal" : {
"configuration" : "spark-defaults/spark.history.kerberos.principal",
"type" : "user",
"local_username" : "spark",
"value" : "spark@HDSND026.xxxx.xxx.COM"
},
"name" : "sparkuser"
}
]
}]
}
}
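A sketch of posting and then verifying the descriptor, under the same assumptions (payload saved as kerberos_descriptor.json):
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d @kerberos_descriptor.json http://apappu3.hdp.com:8080/api/v1/clusters/test/artifacts/kerberos_descriptor
# Read it back to confirm the custom principal was stored
curl -u admin:admin http://apappu3.hdp.com:8080/api/v1/clusters/test/artifacts/kerberos_descriptor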
4. Update the default Kerberos configurations using the following API call:
PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
Payload -
[
{
"Clusters": {
"desired_config": {
"type": "krb5-conf",
"tag": "version1",
"properties": {
"manage_krb5_conf" : "true",
"conf_dir" : "/etc",
"content" : "\n[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm}}\n{% endfor %}\n{% endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = {{admin_server_host|default(kdc_host, True)}}\n }\n\n{# Append additional realm declarations below #}",
"domains" : ".apache.com,apache.com"
}
}
}
},
{
"Clusters": {
"desired_config": {
"type": "kerberos-env",
"tag": "version1",
"properties": {
"realm" : "AMBARI.APACHE.ORG",
"kdc_type" : "mit-kdc",
"kdc_host" : "apappu3.hdp.com",
"admin_server_host" : "apappu3.hdp.com",
"encryption_types" : "aes des3-cbc-sha1 rc4 des-cbc-md5",
"ldap_url" : "",
"container_dn" : ""
}
}
}
}
]
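As before, a sketch of applying these desired configs (payload saved as kerberos_configs.json):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d @kerberos_configs.json http://apappu3.hdp.com:8080/api/v1/clusters/test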
5. Kerberize the cluster:
PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test
{
"session_attributes" : {
"kerberos_admin" : {
"principal" : "admin/admin",
"password" : "admin"
}
},
"Clusters": {
"security_type" : "KERBEROS"
}
}
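A sketch of the call (payload saved as enable_kerberos.json); this kicks off a long-running request that you can track under /requests as in step 2:
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d @enable_kerberos.json http://apappu3.hdp.com:8080/api/v1/clusters/test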
6. Restart all services once step 5 is completed.
APIs to stop/start services:
STOP - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services {"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
START - PUT - http://apappu3.hdp.com:8080/api/v1/clusters/test/services {"ServiceInfo": {"state" : "STARTED"}}
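Spelled out as curl commands (a sketch, same assumptions as in step 1):
# Stop all services
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://apappu3.hdp.com:8080/api/v1/clusters/test/services
# Start all services
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"ServiceInfo":{"state":"STARTED"}}' http://apappu3.hdp.com:8080/api/v1/clusters/test/services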
Note: I have tried and tested this in Ambari 2.4.x.
01-24-2017 06:01 PM
@Sundara Palanki As far as the Ambari database is concerned, you can update the hostcomponentstate table to point it to the 2.5.3 version and then try.
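As a rough sketch, assuming the default PostgreSQL-backed Ambari database (user and database both named ambari); stop ambari-server and back the database up first, and note that the version strings below are hypothetical:
# See which version each component currently reports
psql -U ambari -d ambari -c "select component_name, version from hostcomponentstate;"
# Point stale rows at the new version (both values are hypothetical examples)
psql -U ambari -d ambari -c "update hostcomponentstate set version = '2.5.3.0-37' where version = '2.5.0.0-1245';"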
01-24-2017 05:48 PM
7 Kudos
Usually a service can be removed using API calls, but if the service is in an inconsistent state, the APIs do not work, so the only way to delete it is by running SQL queries against the Ambari database. Here is the list of steps to delete the KNOX service (see the sketch after this list for running them safely):
1. delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name like '%KNOX%')
2. delete from confgroupclusterconfigmapping where config_type like '%knox%'
3. delete from clusterconfig where type_name like '%knox%'
4. delete from clusterconfigmapping where type_name like '%knox%'
5. delete from serviceconfig where service_name = 'KNOX'
6. delete from servicedesiredstate where service_name = 'KNOX'
7. delete from hostcomponentdesiredstate where service_name = 'KNOX'
8. delete from hostcomponentstate where service_name = 'KNOX'
9. delete from servicecomponentdesiredstate where service_name = 'KNOX'
10. delete from clusterservices where service_name = 'KNOX'
11. delete from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX')
12. delete from alert_notice where history_id in (select alert_id from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX'))
13. delete from alert_definition where service_name like '%KNOX%'
Note 1: I have tried and tested this in Ambari 2.4.x.
Note 2: The queries above are case sensitive, so use the correct upper/lower case for the service name.
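If your Ambari database is the default PostgreSQL setup (user and database both named ambari), a sketch of running the statements safely:
# Stop Ambari and take a backup before touching the database
ambari-server stop
pg_dump -U ambari ambari > ambari_backup.sql
# Open a session and run the deletes inside BEGIN; ... COMMIT; so a mistake can be rolled back
psql -U ambari -d ambari
Start Ambari again afterwards with ambari-server start.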
01-21-2017 12:47 AM
@AAV AAV It looks like your cluster is not pointing to the correct cluster version. Please check the clusterstate table and make sure it is pointing to the HDP 2.3 version.
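A quick read-only check, assuming the default PostgreSQL-backed Ambari database (user and database both named ambari):
psql -U ambari -d ambari -c "select * from clusterstate;"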
01-20-2017 11:51 PM
@sreehari takkelapati From HDP 2.5 onward, Ranger no longer supports auditing to a database, so I am not sure that looking in the DB for audit data is a good idea.
01-20-2017 09:08 PM
2 Kudos
@PJ Yes, you're correct. You can modify the HADOOP_DATANODE_OPTS parameters, and Ambari will prompt you if a restart is required. In this case, yes, a restart is needed.
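For illustration, this is the kind of line involved in the hadoop-env template (HDFS > Configs > Advanced hadoop-env in Ambari); the heap value here is just an example:
export HADOOP_DATANODE_OPTS="-Xmx4096m ${HADOOP_DATANODE_OPTS}"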
01-20-2017 06:53 PM
1 Kudo
@Neil Watson You can with the following REST API call:
curl -u admin:admin -H 'X-Requested-By:admin' -X PUT 'http://localhost:8080/api/v1/clusters/clustername' -d '{
"Clusters": {
"desired_service_config_versions": {
"service_name" : "HDFS",
"service_config_version" : 1,
"service_config_version_note" : "REST API rollback to service config version 1 (HDFS)"
}
}
}'
Here service_config_version is the old version you would like to revert to.
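To find the version number to pass, you can first list the stored config versions (a sketch with the same credentials; the service_name filter is optional):
curl -u admin:admin 'http://localhost:8080/api/v1/clusters/clustername/configurations/service_config_versions?service_name=HDFS'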
01-20-2017 01:33 AM
@Karan Alang Yes, you can do that.