Member since: 01-27-2016
Posts: 27
Kudos Received: 25
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 763 | 08-25-2016 08:52 AM
 | 2546 | 02-16-2016 07:13 AM
09-20-2016
07:47 AM
@Berry Österlund Please check https://issues.apache.org/jira/browse/RANGER-798; the patch on this JIRA should resolve this issue.
08-25-2016
02:50 PM
@Rahul Buragohain This has been incorporated in the next official release. Here are the Apache JIRA details: https://issues.apache.org/jira/browse/RANGER-803
08-25-2016
09:02 AM
@Mourad Chahri Use the following commands to create the 'rangerdba'@'manager' user:
CREATE USER 'rangerdba'@'manager' IDENTIFIED BY 'rangerdba';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'manager';
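If the new grants do not appear to take effect right away, reloading the grant tables and verifying them is a reasonable extra step (a sketch; FLUSH PRIVILEGES and SHOW GRANTS are standard MySQL statements):
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'rangerdba'@'manager';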
08-25-2016
08:52 AM
2 Kudos
@Rahul Buragohain Ranger (on HDP 2.3.4, which ships Apache Ranger 0.5.0) does not support syncing multiple LDAP OUs at this point in time. Please refer to the Apache documentation below: https://cwiki.apache.org/confluence/display/RANGER/Multiple+OU+Ldap+Search+support+for+UserSync However, we can sync one domain controller with Ranger and, until the next release, create local users in Ranger Admin for the others. You may refer to this article on HCC by a Hortonworker: https://community.hortonworks.com/articles/36651/how-to-sync-up-multiple-domain-controllers-from-ad.html
08-17-2016
10:59 AM
@Sindhu "hive.server2.authentication" is set to LDAP and hive.metastore.sasl.enabled=false . Though we see these error. Thanks
08-17-2016
05:31 AM
I'm seeing this message repeated over and over in the log. It doesn't seem to be causing an issue, so can this log entry be disabled?
2016-08-12 14:54:37,199 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
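A possible way to silence just this entry (a sketch, assuming HiveServer2 picks up the standard hive-log4j.properties; the logger name is taken from the stack trace above) is to raise that logger's threshold so ERROR messages are dropped:
log4j.logger.org.apache.thrift.server.TThreadPoolServer=FATAL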
Labels:
- Apache Hive
07-20-2016
09:31 AM
2 Kudos
A few questions on Ambari with LDAP or Active Directory authentication:
1. When users are synced into Ambari, are their passwords also stored in Ambari's local DB along with the usernames?
2. When a user logs into Ambari, is there a way for the user to change his password?
3. When we create a user in AD, we set the property "user must change password at next logon"; after the LDAP sync, that user cannot log into Ambari. What could be the problem? When we go back to AD and untick "user must change password at next logon", the user is able to log into Ambari again. Any pointers would help. Thanks
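For reference, the LDAP sync mentioned in question 3 is a run of Ambari's sync CLI (a sketch; --all syncs all LDAP users and groups, --existing refreshes only those already synced):
# ambari-server sync-ldap --all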
Labels:
- Apache Ambari
06-22-2016
01:31 PM
1 Kudo
Documentation referred:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/configure-ranger_policy_admin_ha.html
Environment Information:
Cluster name: hivenv
hivenv-ambari-server.hwxblr.com >> 10.0.1.26 >> Ambari Server
hn1.hwxblr.com >> 10.0.1.21 >> Existing Ranger Admin
hn3.hwxblr.com >> 10.0.1.23 >> Load Balancer
hn2.hwxblr.com >> 10.0.1.25 >> Additional Ranger Admin
# hadoop version
Hadoop 2.7.1.2.3.2.0-2950
Subversion git@github.com:hortonworks/hadoop.git -r5cc60e0003e33aa98205f18bccaeaf36cb193cc
Compiled by jenkins on 2015-09-30T18:08Z
Compiled with protoc 2.5.0
From source with checksum 69a3bf8c667267c2c252a54fbbf23d
This command was run using /usr/hdp/2.3.2.0-2950/hadoop/lib/hadoop-common-2.7.1.2.3.2.0-2950.jar
# uname -a
Linux hn1.hwxblr.com 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS release 6.7 (Final)
Steps to be followed:
Install the Ranger Admin component on the hosts you wish to use – hn1.hwxblr.com. For information about installing Ranger over Ambari, see Installing Ranger Over Ambari 2.0.
Configure a load balancer to balance the load among the various Ranger Admin instances and take note of the load balancer URL.
Step 1: Before installing HAProxy on the server, we need to install the EPEL repository, using one of the following commands depending on the operating system version.
CentOS/RHEL 5, 32-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
CentOS/RHEL 5, 64-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
CentOS/RHEL 6, 32-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
CentOS/RHEL 6, 64-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Step 2: Install HAProxy using Yum.
[root@hn3 ~]# yum install haproxy
Step 3: Now we will configure HAProxy.
[root@hn3 ~]# vi /etc/haproxy/haproxy.cfg
Please refer to the documentation before editing this file:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-haproxy-setup-VSA.html
[root@hn3 ~]# cat haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 30000
#---------------------------------------------------------------------
# round robin balancing between the Ranger Admin instances
#---------------------------------------------------------------------
frontend haproxy
    bind 10.0.1.23:6080
    reqadd X-Forwarded-Proto:\ http
    default_backend ranger_ha
backend ranger_ha
    balance roundrobin
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:redhat
    option httpchk
    option httpclose
    option forwardfor
    cookie LB insert
    server hn1.hwxblr.com 10.0.1.21:6080 cookie A check
    server hn2.hwxblr.com 10.0.1.25:6080 cookie B check
[root@hn3 ~]#
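To actually get the messages described in the comments above into /var/log/haproxy.log, the syslog daemon must accept UDP log events for facility local2. A minimal sketch for rsyslog as shipped with CentOS (lines added to /etc/rsyslog.conf, followed by a restart of the rsyslog service):
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log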
Step 4: Start the HAProxy service.
[root@hn3 ~]# service haproxy start
Step 5: To make the HAProxy service persist through reboots:
[root@hn3 ~]# chkconfig haproxy on
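The configuration file can also be syntax-checked before (re)starting the service (a sketch; haproxy -c validates the file without starting the daemon):
[root@hn3 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg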
Update the Policy Manager external URL in all Ranger Admin clients (Ranger UserSync and Ranger plug-ins) to point to the load balancer URL:
Ambari >> Ranger >> Configs >> Ranger Settings >> External URL (policymgr_external_url): http://hn3.hwxblr.com:6080
Run the Enable Ranger Admin HA wizard: Ambari >> Ranger >> Service Actions >> Enable Ranger Admin HA
URL to load balancer: http://10.0.1.23:6080
Select hn2.hwxblr.com as the additional Ranger Admin and install the additional Ranger Admin.
Access the load balancer URL http://10.0.1.23:6080 ; you should be able to access the Ranger policies.
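As a quick smoke test of the balancer itself (curl -I fetches only the response headers), repeated requests should be answered by the Ranger Admin instances in turn:
[root@hn3 ~]# curl -I http://10.0.1.23:6080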
Tags:
- configuration
- HDFS
- High-Availability
- How-To/Tutorial
- Ranger
- Security
06-21-2016
07:20 AM
If we turn on encryption, should we turn off compression? Are the data nodes smart enough to realize that there's no point trying to compress encrypted blocks?
Labels:
- Cloudera Navigator Encrypt
06-14-2016
08:38 AM
@dsharma Thank you
06-14-2016
05:19 AM
Windows UID and GID values are 9 digits long. Will Ranger function properly when its authorization policies depend on them?
Some group names contain special characters. Will they cause any technical difficulties for the cluster, especially for Ranger?
Labels:
- Apache Ranger
06-01-2016
12:11 PM
2 Kudos
Use Case:
1. Adding a member to another group and being able to manage them internally, without having to deal with outside or additional products.
2. Being able to easily determine which members reside in which groups, instead of scrolling down page after page, especially when you have hundreds of users to keep track of.
3. Easily administering various groups without the hassle of creating more and more Active Directory/LDAP associations and submitting change control requests to other departments for something we should be able to administer on our own.
The Ranger User Sync process supports reading user and group information from one of the following sources:
- Unix
- Text file (CSV or JSON format)
- LDAP/AD
CSV format: If the filename does not end with .json, each line in the file is treated as delimiter-separated fields in the following format. The default delimiter is a comma; this can be changed via configuration.
user-1,group-1,group-2,group-3
user-2,group-x,group-y,group-z
CSV file format, e.g. UserGroupSyncFile.txt:
"user21","group20","group218","group26","group27","group262","group242","group219","group23"
"user22","group20","group218","group26"
"user23","user24","group20","group218"
To run it as a command-line tool:
java -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/current/ranger-usersync/dist/*:/usr/hdp/current/ranger-usersync/lib/*:/usr/hdp/current/ranger-usersync/conf org.apache.ranger.unixusersync.process.FileSourceUserGroupBuilder /tmp/UserGroupSyncFile.txt
Steps: Create a group called solr_group and add certain users (imported from LDAP) that we know will use Solr. All the users are associated only with the groups defined through LDAP, but we want to create additional groups and link users to those groups in Ranger.
1. Cluster with Ranger, configured with LDAP users. Here the user is "packer".
2. Create an internal group in the Ranger UI. Here it is "solr_group".
3. Edit the external LDAP user to add it to the group we created.
4. The group field is greyed out in the Ranger UI for that LDAP user and cannot be edited, so we use the file-based sync instead:
[root@sandbox ~]# vi /tmp/ugsync.txt
[root@sandbox ~]# cat /tmp/ugsync.txt
"packer","packer","mygrp","test","solr_group"
[root@sandbox ~]# java -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/current/ranger-usersync/dist/*:/usr/hdp/current/ranger-usersync/lib/*:/usr/hdp/current/ranger-usersync/conf org.apache.ranger.unixusersync.process.FileSourceUserGroupBuilder /tmp/ugsync.txt
log4j: reset attribute= "false".
log4j: Threshold ="null".
log4j: Level value for root is [info].
log4j: root level set to INFO
log4j: Class name: [org.apache.log4j.DailyRollingFileAppender]
log4j: Setting property [file] to [/var/log/ranger/usersync/usersync.log].
log4j: Setting property [datePattern] to ['.'yyyy-MM-dd].
log4j: Parsing layout of class: "org.apache.log4j.PatternLayout"
log4j: Setting property [conversionPattern] to [%d{dd MMM yyyy HH:mm:ss} %5p %c{1} [%t] - %m%n].
log4j: setFile called: /var/log/ranger/usersync/usersync.log, true
log4j: setFile ended
log4j: Appender [logFile] to be rolled at midnight.
log4j: Adding appender named [logFile] to category [root].
log4j: /var/log/ranger/usersync/usersync.log -> /var/log/ranger/usersync/usersync.log.2016-04-04
log4j: setFile called: /var/log/ranger/usersync/usersync.log, true
log4j: setFile ended
[root@sandbox ~]# cd /var/log/ranger/usersync
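To confirm that the new association was picked up, the usersync log written by the run above can be searched for the group name (a simple grep; the log path comes from the -Dlogdir setting used earlier):
[root@sandbox ~]# grep -i solr_group /var/log/ranger/usersync/usersync.log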
Tags:
- HDFS
- How-To/Tutorial
- Issue Resolution
- process-groups
- Ranger
- ranger-usersync
- Security
- user-groups
05-23-2016
11:29 AM
1 Kudo
1) Using the hdfs dfs -ls command, I see /apps/hive with permissions 777.
2) I modify the permissions on /apps/hive to 700 using the hdfs dfs -chmod command.
3) Going back to Ranger and modifying the HDFS policy to add users with access to the path /apps/hive/warehouse, Ranger no longer syncs with HDFS.
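For reference, the commands behind steps 1 and 2 (a sketch; the path is the one described above):
# hdfs dfs -ls /apps
# hdfs dfs -chmod 700 /apps/hive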
Labels:
- Apache Hadoop
- Apache Ranger
05-18-2016
12:14 PM
1 Kudo
I need to know if it is possible to use Ambari to create and maintain principals and keytabs for a third-party application whose services are not managed by Ambari.
Labels:
- Apache Ambari
05-12-2016
09:58 AM
1 Kudo
What configuration changes are needed in the YARN Capacity Scheduler to satisfy both of the below?
1. Prevent a user from killing another user's job.
2. All users should be able to view job info in the RM UI for jobs that don't match their ID.
The customer has set yarn.acl.enable to true and yarn.admin.acl to the yarn user; after that, 1 works as expected but 2 does not.
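For reference, the knobs that usually interact with these two requirements (a sketch; the property names are the standard Capacity Scheduler and MapReduce ACL properties, while the queue name 'default' and the values are illustrative):
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn    (capacity-scheduler.xml: who may kill/administer apps in the queue)
yarn.scheduler.capacity.root.default.acl_submit_applications=*    (capacity-scheduler.xml: who may submit to the queue)
mapreduce.job.acl-view-job=*    (mapred-site.xml: who may view another user's job details in the UI)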
Labels:
- Apache YARN
03-10-2016
09:08 AM
3 Kudos
I have found a few answers; I hope I can find the ones I was unable to:
i. Kerberos? - Yes, supported. Hortonworks uses Kerberos for authentication of users and resources within a Hadoop cluster. HDP also includes Ambari, which simplifies Kerberos setup, configuration, and maintenance.
iii. SAML 2.0? - Yes.
v. XACML? - Not supported. Apache Ranger uses a different access control mechanism, which is better suited to the Hadoop ecosystem.
ix. X.509 digital certificate based authentication? - Yes.
x. Multi-factor authentication for public cloud interfaces? - No.
Still open:
ii. WS-Federation?
iv. WS-Security?
vi. OAuth 2.0?
vii. OAuth UMA?
viii. OIDC?
03-01-2016
07:54 AM
1 Kudo
@vpoornalingam Thank you! This helps.
02-29-2016
08:38 AM
1 Kudo
@Neeraj Sabharwal Thank you. Is there any documentation on docs.hortonworks.com that I can refer to?
02-29-2016
08:33 AM
2 Kudos
Labels:
- Apache Ambari
- Apache Spark
02-16-2016
07:13 AM
5 Kudos
Hi @Revathy Mourouguessane, you can also try to set up port forwarding from VirtualBox. Please follow:
Click VirtualBox -> Preferences -> Network (on the left menu bar), select your NAT network (LocalNat), and click the screwdriver icon to the right to edit, as we did before when confirming the networking settings for the NAT network. This brings you back to the NAT networking details window; click the 'Port Forwarding' button. You'll need to set up a rule to forward a port on your local machine to the port of your virtual machine.
Name: Sandbox
Protocol: TCP
Host IP: <blank>
Host Port: 8080
Guest IP: 10.0.2.15 (as in your case)
Guest Port: 8080
Let me know if this helped you resolve the issue.
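The same rule can also be created from the command line (a sketch, assuming the NAT network is named LocalNat as above; VBoxManage natnetwork is VirtualBox's CLI for NAT networks):
VBoxManage natnetwork modify --netname LocalNat --port-forward-4 "Sandbox:tcp:[]:8080:[10.0.2.15]:8080"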