Member since: 05-22-2019
Posts: 70
Kudos Received: 24
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1146 | 12-12-2018 09:05 PM |
| | 1159 | 10-30-2018 06:48 PM |
| | 1596 | 08-23-2018 11:17 PM |
| | 7272 | 10-07-2016 07:54 PM |
| | 1926 | 08-18-2016 05:55 PM |
10-28-2016
06:02 PM
Just to confirm my understanding: permissions on Phoenix tables still need to be controlled directly in HBase, with grants on the underlying HBase tables? Would we also have to grant access on the secondary index tables? In our situation, only admins will be allowed to create tables, and regular users can then read from or write to them. So we would use a superuser to create the tables and then grant RWX access on the underlying tables in HBase to normal users (a sketch of the grant pattern I have in mind is below). Also, namespace support is, as I understand it, available from Phoenix 4.7+.
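For illustration only, the HBase shell grants I have in mind would look roughly like this; the user, table, and index names are made up, and the index table name is just an assumption about how the Phoenix secondary index is materialized, not something confirmed above:
# run as an HBase superuser after the Phoenix table (and any index) has been created
grant 'regular_user', 'RWX', 'MY_TABLE'
# hypothetical HBase table backing a Phoenix secondary index on MY_TABLE
grant 'regular_user', 'RWX', 'MY_TABLE_IDX'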
10-28-2016
05:01 PM
There are numerous references describing how the SYSTEM tables are created the first time a user connects to Phoenix, which means the user would need Create and Write permissions in the HBase default namespace.
1. Does this happen for each user?
2. Does it happen on every login?
I ask because we had users hitting "Insufficient permissions" errors. We granted them 'RWXCA' permissions in HBase, and everything worked. After the first login we tried removing permissions again (the goal is a read-only user), but once we removed the 'CW' permissions they could no longer log in and started getting the Insufficient Permissions error again: Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=svc.xyx@FOO.BAR, scope=SYSTEM.CATALOG, family=, action=CREATE)
grant 'xyz', 'RWXCA', '@default'
-- All good!
grant 'xyz', 'RX', '@default'
-- No good, even after the first login
3. Do the users ALWAYS have to have 'CW' access to the HBase default namespace? And if so, what is the best way to control table-level security in Phoenix?
Labels:
- Apache HBase
- Apache Phoenix
10-07-2016
07:54 PM
Nothing like writing something down to make you find the answer! There is an open JIRA to correct this in Ranger: the database name is not being passed correctly when determining permissions for temporary functions (permanent functions are not affected). The current workaround is to specify "*" for the database name in the function policy. A rough sketch of such a policy is below.
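For illustration, if the policy were defined through Ranger's public REST API, the workaround might look something like this sketch; the field names follow Ranger's policy model, but the service name, user, and UDF value are placeholders rather than anything from this thread:
{
  "service": "my_cluster_hive",
  "name": "temp function workaround",
  "isEnabled": true,
  "resources": {
    "database": { "values": ["*"], "isExcludes": false, "isRecursive": false },
    "udf": { "values": ["*"], "isExcludes": false, "isRecursive": false }
  },
  "policyItems": [
    { "users": ["some_user"], "accesses": [ { "type": "create", "isAllowed": true } ], "delegateAdmin": false }
  ]
}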
10-07-2016
07:35 PM
With Ranger permissions for Hive, I can create a permanent function but not a temporary one. Ranger is installed with Hive doAs=false (queries run as the hive user), and only Hive policies are set up, no HDFS policies. There are two policies for this user and this database:
1) All tables in the database, granting all permissions
2) All functions in the database, granting all permissions
With the TEMPORARY keyword I get an access denied error; remove it and there is no problem. Are there specific permissions needed for temporary functions, maybe in some other database?
-- Create Temporary Function - No Joy!
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> CREATE TEMPORARY FUNCTION FN_UNIQUE_NUMBER
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> AS 'com.screamingweasel.amp.hive.udf.UniqueNumberGenerator'
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> USING JAR 'hdfs:///tmp/custom-hive-udf-2.4.1.jar';
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [batyr_amp_admin] does not have [CREATE] privilege on [amp_unique_number] (state=42000,code=40000)
-- Create permanent function no problem
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> CREATE FUNCTION FN_UNIQUE_NUMBER
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> AS 'com.screamingweasel.amp.hive.udf.UniqueNumberGenerator'
0: jdbc:hive2://evdp-lnx-hrt014.aginity.local> USING JAR 'hdfs:///tmp/custom-hive-udf-2.4.1.jar';
INFO : converting to local hdfs:///tmp/custom-hive-udf-2.4.1.jar
INFO : Added [/tmp/68037a68-6f5b-40ef-8073-fefdaa6319be_resources/custom-hive-udf-2.4.1.jar] to class path
INFO : Added resources: [hdfs:///tmp/custom-hive-udf-2.4.1.jar]
Labels:
- Apache Hive
- Apache Ranger
10-07-2016
05:58 PM
WINNER! That did the trick. I changed:
ranger.usersync.ldap.username.caseconversion=lower
ranger.usersync.ldap.groupname.caseconversion=lower
then restarted Ranger (which performs the usersync). All group names are now lowercase in both Ranger and HDFS.
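If it helps to locate them, these two properties typically live in the Ranger usersync configuration (ranger-ugsync-site on HDP-style installs); a sketch of the resulting entries, assuming that is where your installation keeps them:
<property>
  <name>ranger.usersync.ldap.username.caseconversion</name>
  <value>lower</value>
</property>
<property>
  <name>ranger.usersync.ldap.groupname.caseconversion</name>
  <value>lower</value>
</property>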
10-07-2016
03:31 PM
In Ranger, the group name is 'BATYR_AMP_ADMINS' (both under Groups and when added to the policy). It is uppercase in Active Directory and shows as uppercase in usersync.log, with ranger.usersync.ldap.groupname.caseconversion=none. HOWEVER, as you can see in the main question, the hdfs groups command and the Linux id command show it lowercase. Is this expected behaviour?
10-07-2016
02:28 PM
'domain users' is one of the groups that all users are associated with. However, it is NOT the one we are using for the policy. That group is 'batyr_amp_admins' (underscores and no spaces.) Would this still be an issue?
10-07-2016
01:58 PM
2 Kudos
Having an issue with applying Ranger policy permissions through groups. I see there are several questions on this, and I am having the same basic issue: policies get applied when the user is specified, but not when a group is used. I have gone through all of the debugging steps suggested in those questions but am still having problems. SSSD: we do have this running and are able to see the groups (note: NN, HS2, and Ranger are all on this same host):
$ hdfs groups batyr_amp_admin
batyr_amp_admin : domain users batyr_amp_admins
$ id batyr_amp_admin
uid=1080619417(batyr_amp_admin) gid=1080600513(domain users) groups=1080600513(domain users),1080619409(batyr_amp_admins)
QUESTION: If SSSD is running, do you ALSO have to set up the core-site group mapping? (A sketch of the mapping I mean is after the log below.) From hiveserver2.log:
2016-10-07 09:46:55,322 WARN [HiveServer2-Handler-Pool: Thread-5841]: thrift.ThriftCLIService (ThriftCLIService.java:ExecuteStatement(512)) - Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [batyr_amp_admin] does not have [USE] privilege on [amp_land]
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:148)
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:226)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:276)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:468)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:456)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:298)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:562)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [batyr_amp_admin] does not have [USE] privilege on [amp_land]
at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:412)
at org.apache.hadoop.hive.ql.Driver.doAuthorizationV2(Driver.java:855)
at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:643)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:510)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:320)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1219)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1213)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146)
... 15 more
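To be concrete about what I mean by the core-site mapping: the properties I have in mind are the standard Hadoop LDAP group-mapping settings, sketched below; the LDAP URL and search base are purely illustrative placeholders, not our actual values:
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ad.example.com:389</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value>dc=example,dc=com</value>
</property>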
Labels:
- Apache Ranger
08-18-2016
05:55 PM
1 Kudo
I wanted to update this with the solution we arrived at, which is very similar to the answer by @Geoffrey Shelton Okot. Sqoop installs the MySQL JDBC connector, which has a dependency on the OpenJDK version of Java. If you are using another JDK (like Oracle's), it thinks Java is not there, so it goes ahead and installs OpenJDK. The solution we arrived at was to pre-install the MySQL connector from the HDP-UTILS repo, so that Sqoop will not try to reinstall it:
/usr/bin/yum --disablerepo=* --enablerepo=HDP-UTILS* -d 0 -e 0 -y install mysql-connector-java
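If it helps, a quick check (just an illustrative verification step) that the connector package is already present before installing Sqoop:
rpm -q mysql-connector-java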