
How to use blueprints with pre-created accounts for services

Super Collaborator

Is there a way to use blueprints when we have pre-created service accounts (created in AD/Centrify)? If so, I could use some help with the specifics. I assume that in the blueprint I could add properties something like this:

"spark_user" : "svcspark",

"spark_group" : "svcspark"

Part 1 of this question -- Will that cause Spark to run as svcspark?

Also, related to this, I noticed there are now 3 options (checkboxes) in Ambari 2.5.0.3 which may relate to pre-created accounts. These are found under Misc when adding a service:

  1. Skip group modifications
  2. Have Ambari manage UIDs
  3. Whether to skip creating users and groups in a sysprepped cluster

Part 2 of this question -- Can these be set in blueprints and if so, how?

Our plan is to use blueprints but we need to create all service accounts in AD/Centrify before Ambari or services are installed.

Thanks

1 ACCEPTED SOLUTION

Master Guru

Hi @james.jones

The answer is Yes to both your questions.

Regarding the Spark user and group, in the "spark-env" block of "configurations" you can set exactly what you described:

"spark_user" : "svcspark",
"spark_group" : "svcspark"

and yes, Spark will run as svcspark. Regarding Part 2, those settings can be provided in the "cluster-env" block. The property names and their defaults are:

"ignore_groupsusers_create" : "false",
"override_uid" : "true",
"sysprep_skip_create_users_and_groups" : "false",

The best way to familiarize yourself with these and other "obscure" properties is to export a blueprint from an existing cluster and explore cluster-env and the other config blocks. HTH.
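
If it helps, the export can be done against the Ambari REST API; a sketch, with host, credentials, and cluster name as placeholders:

curl -u admin:admin "http://ambari-host:8080/api/v1/clusters/MYCLUSTER?format=blueprint"

This returns the blueprint JSON, including the cluster-env block, which you can compare against the defaults listed above.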


3 REPLIES


Super Collaborator
@Predrag Minovic

Awesome! Thank you.

New Contributor

Can someone explain what these settings do, especially No. 3? What does Ambari do when we check them?

  1. Skip group modifications
  2. Have Ambari manage UIDs
  3. Whether to skip creating users and groups in a sysprepped cluster