
Zeppelin cannot access cloud storage - Public Cloud

New Contributor

Hi,

I am having trouble with access control for credentials while trying to run a Spark job in Zeppelin. Can anyone help?

 

ERROR idbroker.AbstractIDBClient: Cloud Access Broker response: { "error": "There is no mapped role for the group(s) associated with the authenticated user.",

 

Regards

Lakshmi Segu

 


3 REPLIES

Master Collaborator

Hello Lakshmi,

- Does the role you are mapping to the user in the IDBroker Mappings section have the right S3 bucket specified? Or are you using the same bucket created during the Data Lake deployment?
	- Can you also make sure that Spark is configured to point to the S3 bucket [1]? For Spark, the S3 bucket name must be defined in the property "spark.yarn.access.hadoopFileSystems".
	Example: If using a Data Hub cluster, access the DH in the Management Console > CM-UI > Clusters > Spark > Configurations, then create a file named "spark-defaults.conf" (or update the existing file) with the property below (a short verification sketch follows at the end of this reply):
	spark.yarn.access.hadoopFileSystems=s3a://bucket_name

Alternatively, verify the access setup in the Management Console:

DL > Manage Access > IDBroker Mappings > Edit: confirm the user is mapped to the Data Access Role.
DH > Manage Access: assign yourself the required roles.
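
Once the mapping and the property are in place, a minimal sketch (not from the original thread; the bucket name and path below are placeholders) to verify S3 access from a Zeppelin %pyspark paragraph could look like this:

	# Placeholder bucket; use the one configured in spark.yarn.access.hadoopFileSystems.
	path = "s3a://my-datahub-bucket/tmp/idbroker_check"

	# Write a tiny DataFrame and read it back; both steps fail with the
	# "no mapped role" IDBroker error if the user's group has no mapping.
	df = spark.range(5)
	df.write.mode("overwrite").parquet(path)
	print(spark.read.parquet(path).count())

If the write still fails with the same IDBroker error, the group-to-role mapping (rather than the Spark property) is the likely culprit.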

 

Super Collaborator

Hello @LakshmiSegu 

 

We hope your query was addressed by Shehbaz's response. In summary:

(I) Ensure your username has an IDBroker mapping (Actions > Manage Access > IDBroker Mappings).

(II) Include the "spark.yarn.access.hadoopFileSystems" parameter so that it points to the S3 path [1].
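
As an illustration (not part of the original reply): for a standalone PySpark job submitted outside Zeppelin, the same parameter could also be set programmatically before the SparkContext is created; the bucket name below is a placeholder:

	from pyspark.sql import SparkSession

	# Placeholder bucket; must match a bucket the mapped IDBroker role can access.
	spark = (
	    SparkSession.builder
	    .appName("s3-access-check")
	    .config("spark.yarn.access.hadoopFileSystems", "s3a://my-datahub-bucket")
	    .getOrCreate()
	)

For Zeppelin itself, setting the property in spark-defaults.conf (or in the Spark interpreter settings) as described above is the safer route, since the interpreter's SparkContext is created before any notebook code runs.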

 

Regards, Smarak

 

[1] https://docs.cloudera.com/runtime/7.2.15/developing-spark-applications/topics/spark-s3.html

 

Super Collaborator

Hello @LakshmiSegu 

 

We hope your question concerning the Zeppelin access issue was addressed by our 06/21 post. As such, we shall mark the post as resolved. If you have any concerns, feel free to update the post and we shall get back to you accordingly.

 

Regards, Smarak