Member since: 01-27-2016
Posts: 27
Kudos Received: 25
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1639 | 08-25-2016 08:52 AM
 | 4468 | 02-16-2016 07:13 AM
06-14-2016
08:38 AM
@dsharma Thank you
06-14-2016
05:19 AM
Windows UID and GID values are 9-digit values. Will Ranger function properly when its authorization policies depend on them?
Some group names contain special characters. Will they cause any technical difficulties for the cluster, especially for Ranger?
Labels:
- Apache Ranger
06-01-2016
12:11 PM
2 Kudos
Use Case:
1. Add a member to another group and manage membership internally, without having to deal with outside or additional products.
2. Easily determine which members reside in which groups, instead of scrolling page after page, especially when you have hundreds of users to keep track of.
3. Easily administer various groups without the hassle of creating more and more Active Directory/LDAP associations and submitting change-control requests to other departments for something we should be able to administer on our own.
The Ranger User Sync process supports reading user and group information from one of the following sources:
- Unix
- Text file (CSV or JSON format)
- LDAP/AD
CSV format: if the filename does not end with .json, each line in the file is treated as delimiter-separated fields in the following format. The default delimiter is a comma; this can be changed via usersync configuration (see the sketch below).
user-1,group-1,group-2,group-3
user-2,group-x,group-y,group-z
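The delimiter configuration is not actually shown in this post. As a sketch, in standard Ranger usersync configuration the file source and delimiter are typically set with properties along these lines (property names assumed from the usual ranger-ugsync-site defaults; verify against your Ranger version):
ranger.usersync.filesource.file=/tmp/UserGroupSyncFile.txt
ranger.usersync.filesource.text.delimiter=,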
CSV file format, e.g. UserGroupSyncFile.txt:
"user21","group20","group218","group26","group27","group262","group242","group219","group23"
"user22","group20","group218","group26"
"user23","user24","group20","group218"
To run it as a command-line tool:
java -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/current/ranger-usersync/dist/*:/usr/hdp/current/ranger-usersync/lib/*:/usr/hdp/current/ranger-usersync/conf org.apache.ranger.unixusersync.process.FileSourceUserGroupBuilder /tmp/UserGroupSyncFile.txt
Steps: Create a group called solr_group and add certain users (imported from LDAP) into that group that we know will use Solr. All the users are associated only with the groups defined through LDAP, but we want to create additional groups and link users to those groups in Ranger.
1. Build a cluster with Ranger and configure it with LDAP users. Here the user is "packer".
2. Create an internal group in the Ranger UI. Here it is "solr_group".
3. Edit an external LDAP user to add it to the group we created.
4. The group field is greyed out in the Ranger UI for that LDAP user, so it cannot be edited there.
[root@sandbox ~]# vi /tmp/ugsync.txt
[root@sandbox ~]# cat /tmp/ugsync.txt
"packer","packer","mygrp","test","solr_group"
[root@sandbox ~]# java -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/current/ranger-usersync/dist/*:/usr/hdp/current/ranger-usersync/lib/*:/usr/hdp/current/ranger-usersync/conf org.apache.ranger.unixusersync.process.FileSourceUserGroupBuilder /tmp/ugsync.txt
log4j: reset attribute= "false".
log4j: Threshold ="null".
log4j: Level value for root is [info].
log4j: root level set to INFO
log4j: Class name: [org.apache.log4j.DailyRollingFileAppender]
log4j: Setting property [file] to [/var/log/ranger/usersync/usersync.log].
log4j: Setting property [datePattern] to ['.'yyyy-MM-dd].
log4j: Parsing layout of class: "org.apache.log4j.PatternLayout"
log4j: Setting property [conversionPattern] to [%d{dd MMM yyyy HH:mm:ss} %5p %c{1} [%t] - %m%n].
log4j: setFile called: /var/log/ranger/usersync/usersync.log, true
log4j: setFile ended
log4j: Appender [logFile] to be rolled at midnight.
log4j: Adding appender named [logFile] to category [root].
log4j: /var/log/ranger/usersync/usersync.log -> /var/log/ranger/usersync/usersync.log.2016-04-04
log4j: setFile called: /var/log/ranger/usersync/usersync.log, true
log4j: setFile ended
[root@sandbox ~]# cd /var/log/ranger/usersync
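After the sync completes, one way to verify that solr_group now exists and includes the user is to query the Ranger admin REST API. A minimal sketch, assuming the default admin credentials, port 6080, and a host named sandbox (all assumptions for this environment):
# list the groups known to Ranger (should now include solr_group)
curl -u admin:admin "http://sandbox:6080/service/xusers/groups"
# list user-to-group mappings
curl -u admin:admin "http://sandbox:6080/service/xusers/groupusers"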
05-23-2016
11:29 AM
1 Kudo
1. Using the hdfs dfs -ls command, I see /apps/hive with permissions 777.
2. I modify the permissions on /apps/hive to 700 using the hdfs dfs -chmod command (see the sketch below).
3. Going back to Ranger and modifying the HDFS policy to add users with access to the path /apps/hive/warehouse: Ranger will no longer sync with HDFS.
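For reference, a minimal sketch of the commands described in steps 1 and 2 (paths and the mode value 700 taken from the post):
# inspect current permissions on the Hive warehouse parent directory
hdfs dfs -ls /apps/hive
# tighten permissions from 777 to 700
hdfs dfs -chmod 700 /apps/hive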
Labels:
- Apache Hadoop
- Apache Ranger
05-18-2016
12:14 PM
1 Kudo
Need to know if it is possible to use Ambari to create and maintain principals and keytabs for a third party application whose services are not managed by Ambari.
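For context: for services that Ambari does manage, keytab regeneration can be triggered through the Ambari REST API. A sketch, assuming admin:admin credentials, a host named ambari-host, and a cluster named mycluster (all assumptions):
# regenerate keytabs for all Ambari-managed services
curl -H "X-Requested-By: ambari" -u admin:admin -X PUT \
  -d '{"Clusters": {"security_type": "KERBEROS"}}' \
  "http://ambari-host:8080/api/v1/clusters/mycluster?regenerate_keytabs=all"
Whether this mechanism can be extended to principals for third-party services not managed by Ambari is exactly what this question asks.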
Labels:
- Apache Ambari
05-12-2016
09:58 AM
1 Kudo
What configuration changes are needed in the YARN Capacity Scheduler to satisfy both of the following?
1. Prevent a user from killing another user's job.
2. Allow all users to view job info in the RM UI for jobs that don't match their ID.
The customer has set yarn.acl.enable to true and yarn.admin.acl to the yarn user (see the snippet below); after that, requirement 1 works as expected but requirement 2 does not.
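For reference, the two settings the customer applied, as they would appear in the YARN configuration:
yarn.acl.enable=true
yarn.admin.acl=yarn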
Labels:
- Apache YARN
03-10-2016
09:08 AM
3 Kudos
I have found a few answers; I hope I can find the ones I was unable to:
i. Kerberos? Yes, supported. Hortonworks uses Kerberos for authentication of users and resources within a Hadoop cluster. HDP also includes Ambari, which simplifies Kerberos setup, configuration, and maintenance.
iii. SAML 2.0? Yes.
v. XACML? Not supported; Apache Ranger uses a different access control mechanism, which is better suited to the Hadoop ecosystem.
ix. X.509 digital certificate based authentication? Yes.
x. Multi-factor authentication for public cloud interfaces? No.
Still unanswered:
ii. WS-Federation?
iv. WS-Security?
vi. OAuth 2.0?
vii. OAuth UMA?
viii. OIDC?
03-01-2016
07:54 AM
1 Kudo
@vpoornalingam Thank you! This helps.
02-29-2016
08:38 AM
1 Kudo
@Neeraj Sabharwal Thank you. Is there any documentation on docs.hortonworks.com that I can refer to?
02-29-2016
08:33 AM
2 Kudos
Labels:
- Apache Ambari
- Apache Spark