Member since
07-05-2022
8
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3489 | 03-19-2025 07:14 PM |
07-30-2025
07:42 AM
bc3dcd485adfa1c339eab38f1516c6c5 >> These alphanumeric identifiers relate to a tablet in Kudu, a region in HBase, or a container in Ozone. Did you get a chance to check the Recon UI?
04-15-2025
11:15 AM
Hi @satvaddi, if you are running in a Ranger RAZ-enabled environment, you don't need all of these settings:
> --conf "spark.hadoop.hadoop.security.authentication=KERBEROS" \
> --conf "spark.hadoop.hadoop.security.authorization=true" \
> --conf "spark.hadoop.fs.s3a.delegation.token.binding=org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding" \
> --conf "spark.hadoop.fs.s3a.idb.auth.token.enabled=true" \
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \
> --conf "spark.hadoop.fs.s3a.security.credential.provider.path=jceks://hdfs/user/infa/knox_credentials.jceks" \
> --conf "spark.hadoop.fs.s3a.endpoint=s3.amazonaws.com" \
> --conf "spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem" \

To me it looks like you are bypassing RAZ by setting this parameter:
> --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider" \

Thus, I would check whether the instance profile (the IAM role attached to the cluster) has more privileges than it should, such as direct access to the data. That access should be controlled in Ranger instead.
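As a minimal sketch of what that means in practice (assuming RAZ/IDBroker handles S3 credentials and Ranger handles authorization; the class, jar, and bucket names below are placeholders, not values from this thread), the submit command can be stripped of the explicit S3A credential settings entirely:

```bash
# Minimal sketch for a RAZ-enabled CDP cluster: no explicit S3A credential provider
# or delegation token binding, so credential handling is left to RAZ/IDBroker and
# access control is enforced by Ranger policies.
# com.example.MyJob, my-job.jar, and the s3a:// paths are hypothetical placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyJob \
  my-job.jar \
  s3a://example-bucket/input s3a://example-bucket/output
```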
03-19-2025
07:14 PM
The TLS Certificate Alias gateway-identity1 has already been set. The beeline connection string is working: beeline -u "jdbc:hive2://invgcscdp71901.informatica.com:8443/default;ssl=true;sslTrustStore=/var/lib/knox/gateway/data/security/gateway-client-trust.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/cdp-proxy-api/hive;user=admin;password=admin"