- Does the role you are mapping to the user in the IDBroker Mappings section have the right S3 bucket specified? Or are you using the same bucket created during the Data Lake deployment?
- Can you also make sure that Spark is configured to point to the S3 bucket? For Spark, the S3 bucket name must be defined in the property "spark.yarn.access.hadoopFileSystems".
Example: if you are using a DataHub cluster, access the DH in the Management Console > CM-UI > Clusters > Spark > Configuration, then create a file named "spark-defaults.conf" (or update the existing file) with the property:
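A minimal sketch of the entry in "spark-defaults.conf" — the bucket name below is a placeholder, so substitute the actual S3 bucket your role grants access to:

```properties
# spark-defaults.conf
# Placeholder bucket name -- replace with your own S3 bucket
spark.yarn.access.hadoopFileSystems=s3a://<your-bucket-name>
```

Multiple filesystems can be listed as a comma-separated value if Spark needs access to more than one bucket.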
- DL > Manage Access > IDBroker Mappings > Edit: the Data Access Role was assigned.
- DH > Manage Access: assigned yourself the required roles.
We hope your question concerning the Zeppelin access issue is addressed by our 06/21 post. As such, we shall mark the post as Resolved. If you have any concerns, feel free to update the post and we shall get back to you accordingly.