Support Questions


CDP on Azure: Creation failed (FreeIPA creation operation failed)


I'm trying to register an Azure environment in CDP, following this guide:


All the steps work well, except the registration itself, which gives a 'Creation failed' error. 





Even worse, the Environment and Data Lake cannot be deleted: the deletion fails due to some sort of deadlock between the two, as they were not provisioned correctly.


Any advice?


PS: it's worth noting that the guide comes with a video too (, in which a role is created in Azure by copy-pasting a JSON provided by the CDP UI. This JSON is no longer in the guide, nor in the UI itself, so I guess it's ok?


Thanks and best regards,


Cloudera Employee

Hi Valerio,


A few things to check:

1. If you go to the Data Lake tab in the UI, can you access the CM UI? The logs there should tell you more.

2. This is most likely a misconfiguration of your managed identities/storage account. The best way to find out what's wrong is to send us screenshots of your managed identity/storage account setup in the Azure portal, plus how you reference them in the environment creation wizard in CDP.

Hi Paul,


thanks for your answer.


1 - I could, but since I tried to delete the Data Lake and deleted the associated Storage account in Azure, the logs are not available now. Perhaps they were, at the beginning. I should try again, I guess.


2 - Luckily I documented everything. The storage account doesn't exist anymore, but here are all the screenshots I took while I was following the guide step by step. Maybe you can spot something. Note: the storage account was created with a colleague's account, as he has privileges to do so, while CDP is under my own account.


(let me know if the link doesn't work)


Thanks and best regards,

Cloudera Employee

Hola Valerio,


Looking at Screenshot (34), it looks like you used the AssumerIdentity everywhere.

Instead, you should use a combination of Logger/Ranger/Assumer/DataAccess identities, as detailed here:


Could you try with the proper identity combination and see if that helps?

Hi Paul,


well, that's embarrassing... 🙂 Thanks a lot! Will try tomorrow morning and update the thread.


Thanks and best regards,

Hi Paul,


eventually I managed to create the environment. Thanks! The old one is still there, apparently corrupted, and I'm unable to delete it. It's a bit annoying but I don't think it's a major deal.


However, I'm now getting a role error during Data Hub creation. I used the Data Engineering for Azure template, but it fails with the following message:

Cluster template install failed: [Command [Start], with id [1546334305] failed: Failed to start role., Command [Start], with id [1546334302] failed: Failed to start role., Command [Start], with id [1546334303] failed: Failed to start role.]


I attach a couple of useful screenshots. If I click on the 'full log file' link from the 'details' one, I get:

[Errno 2] No such file or directory: '/var/log/hue/runcpserver.log'

Cloudera Employee

Hi there,


Regarding your datahub failure, it may be due to the fact that your FQDN is too long. Could you try launching a cluster with a shorter name?
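As a rough pre-flight check (a sketch only: 63 characters is the generic DNS label limit, and the headroom reserved below for CDP-generated hostname parts is an assumption, not a documented figure), something like this can flag names that are likely too long:

```shell
# Hypothetical pre-flight check for cluster name length.
# DNS labels are capped at 63 characters; CDP builds each node's FQDN
# from the cluster name plus its own suffixes, so we reserve some
# assumed headroom for those generated parts.
name="my-long-data-engineering-cluster-name"  # hypothetical cluster name
suffix_budget=30                              # assumed headroom for generated parts

if [ $(( ${#name} + suffix_budget )) -gt 63 ]; then
  echo "likely too long: ${#name} chars"
else
  echo "length OK: ${#name} chars"
fi
```

A short name like `de-test1` passes comfortably; long descriptive names tend not to.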


As for the environment not being deleted: what error are you facing when trying to delete it?

Hi Paul,


thanks for your answer.


For the errors, here they are. Please keep in mind the underlying Azure resource group doesn't exist anymore, therefore in a way I understand these errors now.

The thing is that I could not delete them right away, therefore I had to delete the resource group in Azure in order to free up the resources in our Azure subscription.

In all honesty, I don't remember whether the errors looked the same before I deleted the environment in Azure, but I remember the deletion failed, even when I tried 'forcing' it, on both sides.

By the way, clicking on 'Repair' doesn't trigger any action at this point.





As per the Data Hub, I managed to create it. However:

  1. even though the previous Data Hub 'failed' to be provisioned, I just realized it was fully instantiated on Azure, and it cost me about 80 euros over 2 days. I understand I could have checked, just in case, but since the Data Hub failed for a 'naive' and apparently unvalidated error such as the length of the name, one would expect the whole process to fail and the resources not to be created on Azure. I think this scenario could be handled much better, to spare the user such a bad surprise.
  2. I'm trying to run simple workloads, for example in Zeppelin, but:
    1. one of the provided examples starts with a %sh interpreter (a wget command)... However the shell interpreter is not even defined!
    2. If I try to run any Spark command, even "sc" in pyspark, I get the following error:

20/10/06 11:29:23 ERROR common.DefaultRequestExecutor: Error executing request: HTTP/1.1 403 Forbidden
20/10/06 11:29:23 ERROR idbroker.AbstractIDBClient: Cloud Access Broker response: { "error": "There is no mapped role for the group(s) associated with the authenticated user.", "auth_id": "csso_valeriodimatteo" }


I'm the only user, and I have the Environment Admin role... Is there anything else I should be doing before I can actually run some simple workload?


I understand these are many, and quite low-level questions, so please let me know if I can open a direct channel to get some support.


Thanks and best regards,

Cloudera Employee

Hi Valerio,


There is some mapping to be done to enable your permissions.

I think the best way for you to move forward would be to use the resources available to you:

1. Free training, e.g. 

2. Tutorials, e.g.

3. If you are a Cloudera customer, I recommend reaching out to your account team. We have CDP experts that can help you quickly rather than asynchronously.
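For reference, the 'no mapped role' error from IDBroker usually means the environment has no user-to-identity mappings configured yet. A sketch of how that mapping can be set from the CDP CLI (the environment name and identity resource IDs below are placeholders; check the exact flags against your CDP CLI version):

```shell
# Sketch only: placeholders throughout; verify flags against your CDP CLI version.
# Point the data-access and ranger-audit roles at the managed identities
# created by the quickstart template.
cdp environments set-id-broker-mappings \
    --environment-name my-cdp-env \
    --data-access-role "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/DataAccessIdentity" \
    --ranger-audit-role "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/RangerIdentity" \
    --set-empty-mappings

# Push the mappings out to the running environment:
cdp environments sync-id-broker-mappings --environment-name my-cdp-env
```

Per-user mappings (instead of `--set-empty-mappings`) can also be managed from the environment's UI.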

Hi Paul,


thanks for the links you provided. I'll have a look; hopefully I'll spot a step I missed in the guide I followed...


By the way, we are partners, not clients... Do we still get a CDP expert to help us (as you say, quickly rather than asynchronously)? It would be just what I was hoping for... 🙂


Best regards,

Cloudera Employee

Absolutely, we have a partner team that can work with you.


More info here:

Dear Paul,


after checking more documentation and guides online, I can't help but notice that the only thing that seems to be missing is the creation of a 'custom role' to be assigned to the app. Is this the 'mapping' you were referring to?


At this link ( in the Azure video, at minute 5:00 you can see a JSON, and at minute 6:55 this JSON is used to create and assign the custom role.

However, the guide online does not show this step, even though the JSON is visible in a screenshot (


As you can see in my own screenshot, the JSON is not even visible in the current UI.

Screenshot (61).png


 I can't seem to find any other missing step.


In the meanwhile I had to delete the cluster, as it was costing around 80 dollars of resources every day, so I was hoping to get more 'leads' on the error before I try again...


Thanks and best regards,

Cloudera Employee

Hi Valerio,


First, regarding the app role, I think the quick start doc page is out of date (I reported this to our doc team).

You do not need to create a custom role, as long as you create your credential app like this (replace {subscriptionId} with your subscription ID):

az ad sp create-for-rbac \
    --name http://your-cloudbreak-app \
    --role Contributor \
    --scopes /subscriptions/{subscriptionId}



Secondly, did you run step 3 completely?

Specifically, make sure to run the following in an Azure bash shell after the quickstart deployment (the script expects SUBSCRIPTIONID and RESOURCEGROUPNAME to be set to the values used in the quickstart):

export STORAGEACCOUNTNAME=$(az storage account list -g $RESOURCEGROUPNAME|jq '.[]|.name'| tr -d '"')
export ASSUMER_OBJECTID=$(az identity list -g $RESOURCEGROUPNAME|jq '.[]|{"name","principalId":.principalId}|select(.name | test("AssumerIdentity"))|.principalId'| tr -d '"')
export DATAACCESS_OBJECTID=$(az identity list -g $RESOURCEGROUPNAME|jq '.[]|{"name","principalId":.principalId}|select(.name | test("DataAccessIdentity"))|.principalId'| tr -d '"')
export LOGGER_OBJECTID=$(az identity list -g $RESOURCEGROUPNAME|jq '.[]|{"name","principalId":.principalId}|select(.name | test("LoggerIdentity"))|.principalId'| tr -d '"')
export RANGER_OBJECTID=$(az identity list -g $RESOURCEGROUPNAME|jq '.[]|{"name","principalId":.principalId}|select(.name | test("RangerIdentity"))|.principalId'| tr -d '"')
# Assign Managed Identity Operator role to the assumerIdentity principal at subscription scope
az role assignment create --assignee $ASSUMER_OBJECTID --role 'f1a07417-d97a-45cb-824c-7a7467783830' --scope "/subscriptions/$SUBSCRIPTIONID"
# Assign Virtual Machine Contributor role to the assumerIdentity principal at subscription scope
az role assignment create --assignee $ASSUMER_OBJECTID --role '9980e02c-c2be-4d73-94e8-173b1dc7cf3c' --scope "/subscriptions/$SUBSCRIPTIONID"
# Assign Storage Blob Data Contributor role to the loggerIdentity principal at logs filesystem scope
az role assignment create --assignee $LOGGER_OBJECTID --role 'ba92f5b4-2d11-453d-a403-e96b0029c9fe' --scope "/subscriptions/$SUBSCRIPTIONID/resourceGroups/$RESOURCEGROUPNAME/providers/Microsoft.Storage/storageAccounts/$STORAGEACCOUNTNAME/blobServices/default/containers/logs"
# Assign Storage Blob Data Owner role to the dataAccessIdentity principal at logs/data filesystem scope
az role assignment create --assignee $DATAACCESS_OBJECTID --role 'b7e6dc6d-f1e8-4753-8033-0f276bb0955b' --scope "/subscriptions/$SUBSCRIPTIONID/resourceGroups/$RESOURCEGROUPNAME/providers/Microsoft.Storage/storageAccounts/$STORAGEACCOUNTNAME/blobServices/default/containers/data"
az role assignment create --assignee $DATAACCESS_OBJECTID --role 'b7e6dc6d-f1e8-4753-8033-0f276bb0955b' --scope "/subscriptions/$SUBSCRIPTIONID/resourceGroups/$RESOURCEGROUPNAME/providers/Microsoft.Storage/storageAccounts/$STORAGEACCOUNTNAME/blobServices/default/containers/logs"
# Assign Storage Blob Data Contributor role to the rangerIdentity principal at data filesystem scope
az role assignment create --assignee $RANGER_OBJECTID --role 'ba92f5b4-2d11-453d-a403-e96b0029c9fe' --scope "/subscriptions/$SUBSCRIPTIONID/resourceGroups/$RESOURCEGROUPNAME/providers/Microsoft.Storage/storageAccounts/$STORAGEACCOUNTNAME/blobServices/default/containers/data"
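To sanity-check that the assignments took effect (a quick sketch, assuming the variables exported above are still set in your shell):

```shell
# List the role assignments each identity actually holds;
# compare the output against the roles granted above.
for oid in $ASSUMER_OBJECTID $DATAACCESS_OBJECTID $LOGGER_OBJECTID $RANGER_OBJECTID; do
  echo "--- $oid ---"
  az role assignment list --assignee "$oid" --all --output table
done
```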

Let me know if that works out for you.

Dear Paul,

thanks for the answer.

Yes, I did this step as you can see in screenshot 59 of the previous attachment.

HOWEVER, now that you gave me your version, I notice the following:


In the video ( as well as in the file that the guide makes me download ( the variable values are wrapped in double quotes, and the ObjectIDs are just the strings copied from the Azure UI. So, I used this format:

export SUBSCRIPTIONID="27a6aae6-ce60-4ae4-a06e-cfe9c1e824d4"
export RESOURCEGROUPNAME="cdp-poc"
export STORAGEACCOUNTNAME="cdppoccp"
export ASSUMER_OBJECTID="45fef2ef-c25b-45d9-baab-049249d252b1"
export DATAACCESS_OBJECTID="1df6f883-dde7-4c28-8961-cd37a2125273"
export LOGGER_OBJECTID="b86d8a21-fed0-4cf4-a723-936baa3736f4"
export RANGER_OBJECTID="edbe4750-e2a5-4932-9edd-6f9477429838"

Looking at your version, you build these programmatically, so I went to look again and, actually, the example in the very same guide is different from both the video and the template itself. It has this format, with no quotes, and the objectIds have a prefix made of the Storage Account name + Identity ID.

export SUBSCRIPTIONID=jfs85ls8-sik8-8329-fq0m-jqo7v06dk5sy
export RESOURCEGROUPNAME=azure-quickstart-test1
export STORAGEACCOUNTNAME=cdpazureqs
export ASSUMER_OBJECTID=cdpazureqs-Assumer-jd85mvh9-u86n-8j2d-54dg-jd72j5ki1sd2
export DATAACCESS_OBJECTID=cdpazureqs-DataAccess-peyc86sk346c-yj12-ys89-ye5m-zt6wlv95fi23
export LOGGER_OBJECTID=cdpazureqs-Logger-f63ucn04-hf52-rq87-b6gd-v86fds9ptk3g
export RANGER_OBJECTID=cdpazureqs-Ranger-gc86d0uq-l6o4-vx67-qh87-1jf74l0cbeq7

 Might this difference be the cause of the issue? Which one is the right version?


PS: As a matter of fact, if I go to my cloudbreak app Manifest right now I see this...


"appRoles": []

Can this be a hint?

Thanks and best regards,

Cloudera Employee

Again, you have two issues here:

1. Making sure that your app has the Contributor role

2. Making sure that the identities you created with the quick start template have the right permissions


If you follow the instructions I gave you (create the proper app + run the script), it should work.
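As a sanity check (a sketch; `your-cloudbreak-app` and `{subscriptionId}` are placeholders), you can verify the Contributor assignment from the CLI. Note that an empty `appRoles` array in the app manifest is expected here: RBAC role assignments live on the subscription, not in the manifest.

```shell
# Sketch: confirm the credential app's service principal holds Contributor
# on the subscription. Replace the placeholders with your own values.
APP_ID=$(az ad sp list --display-name your-cloudbreak-app --query '[0].appId' -o tsv)
az role assignment list \
    --assignee "$APP_ID" \
    --role Contributor \
    --scope "/subscriptions/{subscriptionId}" \
    --output table
```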

Hi Paul,


I've used the same commands that you shared.

Is there any way I can check whether the Contributor role is properly assigned before I go and start the cluster yet again? That empty 'appRoles' property in the manifest makes me doubt it, and it stays empty even after creating a brand-new app and running your script again.


Best regards,

Cloudera Employee
This looks like a permission issue on your Azure subscription at the time of creation.

Did you manage to get involved with our partner program? We would like to help you out by screen sharing rather than finding the needle in the haystack here 🙂

Hi Paul,


I agree, a call with a shared screen would be better.

So yes, I'm already in the Partner Program and I also got the Partner Development License in order to do this very same exercise with CDP.

However, I can't access the cases in the Support Portal (I get a 'restricted access' message).

If I look at the Partner Development Subscription, I see that support is included with Gold and Platinum, but we are Silver... maybe it depends on this?


I contacted our Partner Sales Manager to see if he can help out.


Best regards,
