04-15-2025 11:15 AM
Hi @satvaddi,

If you are running in a Ranger RAZ-enabled environment, you don't need all of these settings:

> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \

To me it looks like you are bypassing RAZ by setting this parameter:

> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \

Thus, I would check whether the instance profile (the IAM role attached to the cluster) has more privileges than it should, such as direct access to the data. That access should be controlled in Ranger instead.
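With those settings removed, a trimmed invocation could look roughly like this (a sketch only; the class name, JAR, and bucket paths are placeholders, and it assumes RAZ/IDBroker supplies the S3A credentials on the cluster):

```shell
# Hypothetical minimal spark-submit in a RAZ-enabled environment.
# Kerberos login and S3A credential handling are provided by the platform
# (RAZ + IDBroker), so the explicit S3A/Knox/Kerberos confs can be dropped.
# Class, JAR, and bucket names below are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar \
  s3a://my-bucket/input/ s3a://my-bucket/output/
```

S3 access for the submitting user is then authorized by the Ranger policies rather than by the instance profile's IAM permissions.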
12-10-2024 04:33 AM
1 Kudo
Do you know how to keep the token from expiring, or how to renew it from within NiFi?
12-12-2022 05:06 AM
@wbivp If you are using Kerberos authentication, you also need to provide the "kerberos_service_name" parameter. Try setting kerberos_service_name: impala
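For example, with the impyla client it could look like this (a sketch; the hostname is a placeholder, auth_mechanism='GSSAPI' is impyla's Kerberos mechanism, and a valid Kerberos ticket from kinit is assumed):

```python
# Sketch of a Kerberos-authenticated Impala connection using impyla.
# Hostname is a placeholder; a valid Kerberos ticket (kinit) is assumed.
conn_params = {
    "host": "impala-coordinator.example.com",  # placeholder hostname
    "port": 21050,                             # default HiveServer2 port for Impala
    "auth_mechanism": "GSSAPI",                # Kerberos via GSSAPI
    "kerberos_service_name": "impala",         # must match the service principal
}

def connect_to_impala(params):
    # Imported lazily so the parameter dict can be inspected
    # even where impyla is not installed.
    from impala.dbapi import connect
    return connect(**params)
```

Without kerberos_service_name, the client cannot build the correct service principal (impala/<host>@REALM) for the GSSAPI handshake, which is why the connection fails.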