Support Questions


Create HDFS directory using ambari blueprint

Rising Star

I am experimenting with Ambari Blueprints and would like to know whether there is a way to have a few HDFS directories created as part of blueprint deployment. Is there a way to ask Ambari, or Hadoop, to create certain directories I specify in the blueprint? And if so, where in the blueprint should I specify them?

1 ACCEPTED SOLUTION

Master Mentor

@Theyaa Matti

I do not see such an option in Ambari Blueprints.

The whole purpose of a blueprint is to provide a declarative definition of a cluster. With a Blueprint, you specify a Stack, the component layout, and the configurations needed to materialize a Hadoop cluster instance (via a REST API) without having to use the Ambari Cluster Install Wizard.
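As a rough illustration of that REST-driven flow, the two calls usually look something like the sketch below. The host name, credentials, and JSON payload file names are placeholders, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of the two REST calls that materialize a cluster from a blueprint.
# Host, credentials, resource names, and JSON payload files are placeholders.
AMBARI="http://ambari.example.com:8080/api/v1"

# post <path> <payload-file>: one Ambari API call. Echoed rather than
# executed, since this is only a sketch; drop the `echo` to run it.
post() {
    echo curl -u admin:admin -H "X-Requested-By: ambari" \
         -X POST "$AMBARI/$1" -d "@$2"
}

post "blueprints/my-blueprint" blueprint.json     # 1. register the blueprint
post "clusters/my-cluster" cluster-template.json  # 2. instantiate a cluster
```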

A requirement like creating "a few HDFS directories" is a post-cluster-setup task.

Please refer to the following link to learn more about what can be achieved with Ambari Blueprints: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Introduction
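For that post-cluster-setup piece, one common approach is a small script run once the cluster is up, along these lines. The directory paths and owner below are made-up examples, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# Sketch of a post-deployment step: create the HDFS directories that the
# blueprint itself cannot express. The paths and owner are made-up examples;
# the commands are echoed as a dry run -- drop `echo` to execute them for
# real (typically as the hdfs superuser).
DIRS="/user/myuser /app-logs/myapp /tmp/staging"

for d in $DIRS; do
    echo hdfs dfs -mkdir -p "$d"
    echo hdfs dfs -chown myuser:hadoop "$d"
done
```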


2 REPLIES


Rising Star

Thank you for the reply. But some services, such as Spark2, require HDFS folders to exist before they can run. Part of the point of using a blueprint is that I can get a working cluster with one command.
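If the goal is still a single command, one option is a wrapper script that submits the blueprint and cluster-create calls, polls the resulting install request until Ambari reports it complete, and only then creates the directories a service such as Spark2 expects. A rough sketch, where the host, credentials, request id, and the /spark2-history path and ownership are assumptions to verify against your own stack:

```shell
#!/bin/sh
# Sketch: the glue between "cluster-create submitted" and "HDFS directories
# exist", so blueprint deployment plus directory creation is one command.
# Host, credentials, request id, and /spark2-history are assumptions.
API="http://ambari.example.com:8080/api/v1"
CRED="admin:admin"

# The cluster-create POST returns a request resource; poll it until
# Ambari reports the install COMPLETED before touching HDFS.
wait_for_cluster() {
    until curl -s -u "$CRED" "$API/clusters/my-cluster/requests/1" |
          grep -q '"request_status" : "COMPLETED"'; do
        sleep 30
    done
}

# wait_for_cluster   # enable once pointed at a live Ambari server

# HDP's Spark2 history server commonly expects this directory; the
# commands are echoed here as a dry run -- drop `echo` to execute.
echo hdfs dfs -mkdir -p /spark2-history
echo hdfs dfs -chown spark:hadoop /spark2-history
```

The polling step matters because the cluster-create call returns immediately while installation continues in the background; creating directories before HDFS is up would fail.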