Ranger and creating database pointing to S3 not working

Expert Contributor

Unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:

SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;


CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";

After Ranger was enabled, we tried to create another database, but it throws an error:

0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

How do I whitelist the fs.s3* parameters in Ranger?

1 ACCEPTED SOLUTION

Expert Contributor

I resolved the problem by adding this configuration to the custom hiveserver2-site.xml:

hive.security.authorization.sqlstd.confwhitelist.append=fs\.s3a\..*|fs\.s3n\..*
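For reference, in a hiveserver2-site.xml file this property would look roughly like the snippet below (the XML wrapper is just illustrative; only the property name and value come from the solution above). HiveServer2 has to be restarted for the change to take effect.

<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>fs\.s3a\..*|fs\.s3n\..*</value>
</property>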


12 REPLIES

Super Collaborator

I am not sure if this is Ranger-related. Could you please provide the hiveserver2.log?

@Anandha L Ranganathan

I don't know if this will help, but you could try setting the parameters in the XML configuration files rather than at runtime.

http://hortonworks.github.io/hdp-aws/s3-security/index.html#configuring-authentication
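For example, setting the keys statically in core-site.xml would look roughly like this (placeholder values; fs.s3a.access.key and fs.s3a.secret.key are the standard S3A property names):

<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>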

Try using a configuration file that stores your AWS credentials. Follow the instructions here:

https://hortonworks.github.io/hdp-aws/s3-security/#create-a-credential-file

Expert Contributor

@Binu Mathew

I am getting an error saying "Unable to load AWS credentials from any provider in the chain".

I am able to read files from S3 by directly passing the access and secret keys:

[hdfs@usw2dxdpmn01 root]$ hadoop fs -Dfs.s3a.access.key=xxxxxxxxxxxx -Dfs.s3a.secret.key=YYYYYYYYYYYYYYY -ls s3a://gd-data-stage/
Found 7 items
drwxrwxrwx   -          0 1970-01-01 00:00 s3a://gd-data-stage/cluster-db
drwxrwxrwx   -          0 1970-01-01 00:00 s3a://gd-data-stage/user
drwxrwxrwx   -          0 1970-01-01 00:00 s3a://gd-data-stage/ut1-upload



Then I created a credential file:

[hdfs@usw2dxdpmn01 root]$ hadoop credential create fs.s3a.access.key -value xxxxxxxxxxxx     -provider jceks://file/tmp/gd-data-stage.jceks
fs.s3a.access.key has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.

[hdfs@usw2dxdpmn01 root]$ hadoop credential create fs.s3a.secret.key -value YYYYYYYYYYYYYYY -provider jceks://file/tmp/gd-data-stage.jceks
fs.s3a.secret.key has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.

[hdfs@usw2dxdpmn01 root]$ hadoop credential list -provider jceks://file/tmp/gd-data-stage.jceks
Listing aliases for CredentialProvider: jceks://file/tmp/gd-data-stage.jceks
fs.s3a.secret.key
fs.s3a.access.key

[hdfs@usw2dxdpmn01 root]$ hadoop fs -Dhadoop.security.credential.provider.path=jceks://file/tmp/gd-data-stage.jceks -ls s3a://gd-data-stage
-ls: Fatal internal error
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
	at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
	at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
	at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
	at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
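Just a guess on my part, but it may be worth setting the provider path in core-site.xml rather than passing it only as a -D option, so every S3A client picks it up when the filesystem is initialized. The standard property is hadoop.security.credential.provider.path, and with the jceks file from above it would look like this:

<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://file/tmp/gd-data-stage.jceks</value>
</property>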

Expert Contributor

@Dominika Bialek , @Binu Mathew,

I configured the credentials in core-site.xml, but the values always come back as "undefined" when I try to view them with the commands below. This is in our "pre-dev" environment, where Ranger is enabled. In our other environment, where Ranger is not installed, we are not facing this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
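One possibility (just a hunch): HiveServer2 may be stripping the credential properties from the session configuration rather than them being unset; newer Hive releases hide any property listed in hive.conf.hidden.list, which can include the fs.s3a keys. Checking that list from beeline should show whether that is the case here:

0: jdbc:hive2://usw2dxdpmn01:10010> set hive.conf.hidden.list;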

@stevel Do you know if using S3 is supported in Ranger?

Super Collaborator

S3 is not supported in Ranger as of now.

Expert Contributor

@Ramesh Mani

This is just a HiveServer2 configuration; the underlying file system is untouched. My expectation is that Hive should work as usual after enabling Ranger. Please correct me if my understanding is incorrect.

Expert Contributor

I resolved the problem by adding this configuration to the custom hiveserver2-site.xml:

hive.security.authorization.sqlstd.confwhitelist.append=fs\.s3a\..*|fs\.s3n\..*
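A quick way to verify the fix, reusing the statements from the question: once HiveServer2 has been restarted with the new whitelist entry, the runtime SETs should be accepted again instead of being rejected at runtime.

SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;

CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";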