
Spark job in CDP 7.2.18 RangerRaz not generating logs

New Contributor

In a CDP Public Cloud 7.2.18 RangerRaz cluster, the Spark job below is not generating logs even though it runs successfully.

[cloudbreak@ajay7218razdh-master0 ~]$ spark3-submit \
> --master yarn \
> --deploy-mode cluster \
> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \
> --conf "spark.driver.extraJavaOptions=-Djavax.net.debug=ssl:handshake" \
> --conf "spark.executor.extraJavaOptions=-Djavax.net.debug=ssl:handshake" \
> --class org.apache.spark.examples.SparkPi \
> /opt/cloudera/parcels/CDH-7.2.18-1.cdh7.2.18.p0.51297892/jars/spark-examples_2.12-3.4.1.7.2.18.0-641.jar 100

4 REPLIES

Expert Contributor

Hi @satvaddi ,

Please follow the actions below to set up the RAZ policies for Spark. Spark doesn't have a Ranger plugin of its own, so only the data access on S3 will be logged; beyond that, the table metadata operations will be logged from HMS.

Running the create external table [***table definition***] location 's3a://bucket/data/logs/tabledata' command in Hive requires the following Ranger policies:

  • An S3 policy in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the hive user to perform recursive read/write.
  • An S3 policy in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the end user.
  • A Hive URL authorization policy in the Hadoop SQL repo on s3a://bucket/data/logs/tabledata for the end user.

Access to the same external table location using Spark shell requires an S3 policy (Ranger policy) in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the end user.
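As a quick way to exercise that last policy, a smoke test from spark-shell might look like the sketch below. The path s3a://bucket/data/logs/tabledata is the placeholder from the policies above, so substitute your actual bucket and prefix; with the cm_s3 policy for the end user in place, the read should succeed and appear as an audit event under cm_s3 in Ranger.

```shell
# Hypothetical smoke test: read the external table location as the end user.
# Path below is the placeholder from the policies above -- replace it.
spark3-shell <<'EOF'
val df = spark.read.text("s3a://bucket/data/logs/tabledata")
df.show(5)
EOF
```

If this read fails with a 403 while the hive user's queries work, the missing piece is usually the end-user S3 policy rather than the hive one.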

New Contributor

Hi @AyazHussain ,

I have updated/added the suggested changes in the Ranger policies, but the issue is still occurring.
Please find the attached screenshots for reference.

image.png
image.png

Regards,
Sushant

Expert Contributor

Hi @Jaguar ,

Can you please get the ResourceManager (RM) logs, grep them for "Ranger", and check the results?
Do you have the cm_yarn service plugin set up in Ranger?
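To sketch that check concretely: the commands below assume a typical CDP log location for the ResourceManager, which varies with your Cloudera Manager configuration, so adjust the path for your cluster.

```shell
# On the ResourceManager host (log path is an assumption -- check your
# Cloudera Manager "ResourceManager Log Directory" setting):
grep -i "ranger" /var/log/hadoop-yarn/*RESOURCEMANAGER*.log* | tail -n 50
```

In the Ranger Admin UI, you can also confirm under Audit > Plugin Status that the YARN service plugin is downloading policies recently.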

New Contributor

Hi @satvaddi ,

If you are running in a Ranger RAZ-enabled environment, you don't need all of these settings:
> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \


To me it looks like you are bypassing RAZ by setting this parameter:
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \


Also, I would check whether the instance profile (the IAM role attached to the cluster) has more privileges than it should, such as direct access to the data. Data access should be controlled in Ranger instead.
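Putting that advice together, a trimmed-down submit on a RAZ-enabled cluster could be as simple as the sketch below. The jar path and class are the ones from the original command; the credential and S3A settings are left to the cluster defaults so that RAZ is not bypassed.

```shell
spark3-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /opt/cloudera/parcels/CDH-7.2.18-1.cdh7.2.18.p0.51297892/jars/spark-examples_2.12-3.4.1.7.2.18.0-641.jar 100
```

If this version produces the expected audit events, you can reintroduce any genuinely needed --conf overrides one at a time to find the one that breaks auditing.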