Created 03-31-2025 10:08 PM
in a CDP Public Cloud 7.2.18 Ranger RAZ cluster. The Spark job below is not generating Ranger audit logs even though it runs successfully.
[cloudbreak@ajay7218razdh-master0 ~]$ spark3-submit \
> --master yarn \
> --deploy-mode cluster \
> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \
> --conf "spark.driver.extraJavaOptions=-Djavax.net.debug=ssl:handshake" \
> --conf "spark.executor.extraJavaOptions=-Djavax.net.debug=ssl:handshake" \
> --class org.apache.spark.examples.SparkPi \
> /opt/cloudera/parcels/CDH-7.2.18-1.cdh7.2.18.p0.51297892/jars/spark-examples_2.12-3.4.1.7.2.18.0-641.jar 100
Created 04-02-2025 10:07 PM
Hi @satvaddi ,
Please follow the actions below to set up the policies in RAZ for Spark. Spark doesn't have a Ranger plugin of its own, so only the data accessed on S3 will be audited; apart from that, table metadata operations are audited through HMS.
Running the create external table [***table definition***] location 's3a://bucket/data/logs/tabledata' command in Hive requires the following Ranger policies:
Accessing the same external table location from the Spark shell requires an S3 policy (Ranger policy) in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the end user.
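For reference, such a cm_s3 policy could look roughly like the JSON sketch below (as exported from the Ranger Admin REST API). The bucket name, path, and user are the placeholders from the example above; the exact resource and access-type names in your environment should be confirmed in the Ranger Admin UI for the cm_s3 service:

```json
{
  "service": "cm_s3",
  "name": "spark-tabledata-access",
  "resources": {
    "bucket": { "values": ["bucket"] },
    "path":   { "values": ["/data/logs/tabledata"], "isRecursive": true }
  },
  "policyItems": [
    {
      "users": ["enduser"],
      "accesses": [
        { "type": "read",  "isAllowed": true },
        { "type": "write", "isAllowed": true }
      ]
    }
  ]
}
```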
Created 04-14-2025 06:44 AM
Hi @AyazHussain ,
I have updated/added the suggested changes to the Ranger policies, but the issue is still occurring.
Please find the attached screenshots for reference.
Regards,
Sushant
Created 04-15-2025 01:25 AM
Hi @Jaguar ,
Can you please collect the ResourceManager (RM) logs, grep them for "Ranger", and check the results?
Do you have the cm_yarn service plugin setup in Ranger?
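The check above can be run on the ResourceManager host along the lines of the sketch below. The log location is an assumption based on typical CDP clusters; adjust it to wherever Cloudera Manager writes the RM log on your nodes:

```shell
# Grep the ResourceManager log for Ranger plugin activity
# (policy refresh, authorization decisions, audit errors).
# Log path is an assumption; adjust for your deployment.
grep -i "ranger" /var/log/hadoop-yarn/hadoop-cmf-yarn-RESOURCEMANAGER-*.log* | tail -n 50
```

If the grep returns nothing at all, the YARN Ranger plugin may not be syncing policies, which points back to the cm_yarn service setup in Ranger.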
Created 04-15-2025 11:15 AM
Hi @satvaddi ,
If you are running in a Ranger RAZ-enabled environment, you don't need any of these settings:
> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \
To me it looks like you are bypassing RAZ by setting this parameter:
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
Thus, I would check whether the instance profile (the IAM role attached to the cluster) has too many privileges, such as direct access to the data. That access should be controlled in Ranger instead.
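Putting the advice above together, a resubmission sketch would drop all the manual S3A/Kerberos overrides and let the RAZ-enabled cluster configuration supply them. The jar path is taken from the original command; everything else is the stock spark3-submit invocation:

```shell
# Minimal resubmission: in a RAZ-enabled environment the S3A credential,
# delegation-token, and Kerberos settings are injected by the cluster
# configuration, so no spark.hadoop.* overrides are passed.
spark3-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /opt/cloudera/parcels/CDH-7.2.18-1.cdh7.2.18.p0.51297892/jars/spark-examples_2.12-3.4.1.7.2.18.0-641.jar 100
```

If the audit entries then appear in Ranger for the S3 paths the job touches, the earlier `fs.s3a.aws.credentials.provider` override was the bypass.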