Member since
10-03-2017
82
Posts
2
Kudos Received
0
Solutions
05-25-2018
05:36 PM
I am doing a manual installation and not using Ambari. I installed the Ranger policy manager, usersync, and Solr audit services on one node in my cluster, say node-1. My namenode is running on another node, say node-2. I ran the script /usr/hdp/2.6.4.0-91/ranger-hdfs-plugin/enable-hdfs-plugin.sh on node-2, then logged in to the Ranger UI and created an HDFS service for the HDFS plugin, as shown in the attached pic1.jpg. Test Connection succeeds (pic2.png), but I am not able to see anything for the plugin in the audit portal (pic3.jpg). Ideally, I should see an audit log there with HTTP response 200. Can anyone help point out what I am doing wrong here?
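As a quick sanity check on the namenode host (node-2), the sketch below assumes the default HDP 2.6.4 paths and property names used by the plugin scripts (adjust for a manual layout) and shows where the plugin hook and the audit destination can be verified:
# Expected value: org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer,
# which enable-hdfs-plugin.sh should have set in hdfs-site.xml
hdfs getconf -confKey dfs.namenode.inode.attributes.provider.class
# Audit destination the plugin writes to (file location is an assumption)
grep -A1 "xasecure.audit.destination.solr.urls" /etc/hadoop/conf/ranger-hdfs-audit.xml
Note that the namenode has to be restarted after enabling the plugin before anything shows up under Audit > Plugins.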
... View more
Labels:
05-17-2018
07:42 PM
@Mohsin Aqee I am facing a similar issue: the PutHDFS processor is writing empty files. My HDFS is running in a Kubernetes cluster, with the namenode and datanodes on different pods. I am able to connect to the namenode using its external hostname, hdfs://<Kubernetes-ip>:9000, in core-site.xml. The PutHDFS processor gives no error when dfs.client.use.datanode.hostname=true, but if it is false I get the IOException below:
Caused by: org.apache.hadoop.ipc.RemoteException: File /.test.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
I think this means the client is not able to connect to the datanodes' internal hostnames in the cluster. So I put the external address for the datanode port in hdfs-site.xml, but it still didn't work. I have a Knox gateway in my cluster too. Do you know if I can write files with WebHDFS via Knox using NiFi?
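A minimal connectivity sketch from the NiFi host, assuming it has a local HDFS client with the same core-site.xml/hdfs-site.xml that PutHDFS uses; <datanode-external-host> and port 50010 are placeholders for your externally resolvable datanode address and its default transfer port:
# Confirm the client-side setting PutHDFS will pick up from hdfs-site.xml
hdfs getconf -confKey dfs.client.use.datanode.hostname
# Confirm the datanode hostname returned by the namenode resolves and its transfer port is reachable
nslookup <datanode-external-host>
nc -vz <datanode-external-host> 50010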
... View more
05-14-2018
03:54 PM
I am looking to write a few files to HDFS via Knox. One solution I came across is NiFi, creating a dataflow to write files to HDFS, but per my requirement the writes have to go through Knox. Any suggestion on how I would do this? And if it is not possible to write to HDFS from NiFi via Knox, is there an alternative solution?
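For reference, WebHDFS can usually be called straight through the Knox gateway with curl, so files can be written without NiFi at all. A hedged two-step sketch, assuming the topology is named 'default' and using the demo-LDAP credentials as placeholders:
curl -iku admin:admin-password -X PUT "https://<knox-host>:8443/gateway/default/webhdfs/v1/tmp/test.txt?op=CREATE"
# The response carries a Location header routed back through the gateway;
# send the file body to that URL to finish the create:
curl -iku admin:admin-password -X PUT -T test.txt "<location-url-from-previous-response>"
NiFi could issue the same two HTTP calls itself, but the topology name, host, and credentials above are assumptions to adapt to your setup.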
... View more
Labels:
04-16-2018
10:08 PM
04-12-2018
04:17 PM
Are these Java patches updating tables in the Ranger DB?
... View more
Labels:
03-15-2018
03:08 PM
@Vipin Rathor Thank you so much for the explanation! I have a follow-up question too. If I use the 'AclsAuthz' provider, then I won't be able to do service-level authorization in Ranger by creating policies; is this correct? I think in that case service-level authorization is enforced by what I define in the Knox topology under the 'AclsAuthz' provider, like below: <param>
<name>{serviceName}.acl</name>
<value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
</param>
... View more
03-13-2018
07:12 PM
@Deepak Sharma
@vsuvagia If you look at the gateway.log I attached in my previous post, the main error is: ERROR knox.RangerPDPKnoxFilter (RangerPDPKnoxFilter.java:init(73)) - Error while setting UGI for Knox Plugin... I searched for this error and found this post: https://community.hortonworks.com/questions/97518/help-ad-integration-with-knox.html In that post, the resolution is to change the authorization provider in the admin topology from XAsecurePDPKnox to AclsAuthz. I tried that too, and the test connection succeeds after the change, but I have read that to enable the Ranger plugin the authorization provider has to be XAsecurePDPKnox. Please suggest.
... View more
03-13-2018
02:46 PM
@vsuvagia Yes, I did import the Knox cert into the Ranger truststore. I am not getting any SSL error.
... View more
03-12-2018
03:25 PM
1 Kudo
I am able to create the service with the REST call below:
curl -ivu admin:admin -H "Content-Type: application/json" -d '{"name":"hdfs-test-service","description":"testing","repositoryType":"hdfs","config":"{\"username\":\"admin\",\"password\":\"admin\",\"fs.default.name\":\"hdfs://<namenode-host>:9000\",\"hadoop.security.authorization\":true,\"hadoop.security.authentication\":\"simple\"}","isActive":true}' -X POST http://<ranger-host>:6080/service/public/api/repository
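To double-check that the repository actually registered, a hedged follow-up against the same legacy public API (the listing endpoint is assumed from the POST URL above):
curl -ivu admin:admin -X GET "http://<ranger-host>:6080/service/public/api/repository"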
... View more
03-12-2018
02:45 PM
@Deepak Sharma @vsuvagia Yes, I am using the demo LDAP that comes with Knox. I changed the password from admin to admin-password. Now I am getting different error logs, and changing localhost to the FQDN gives me the same logs.
ranger-admin.log
2018-03-12 14:34:55,969 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/ranger-knox-plugin-0.7.0.2.6.4.0-91.jar
2018-03-12 14:34:55,970 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/jackson-core-asl-1.9.13.jar
2018-03-12 14:34:55,970 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/commons-collections-3.2.2.jar
2018-03-12 14:34:55,970 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/jackson-mapper-asl-1.9.13.jar
2018-03-12 14:34:55,970 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/commons-lang-2.6.jar
2018-03-12 14:34:56,013 [timed-executor-pool-0] ERROR org.apache.ranger.plugin.util.PasswordUtils (PasswordUtils.java:130) - Unable to decrypt password due to error
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 8 when decrypting with padded cipher
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:936)
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:847)
at com.sun.crypto.provider.PBES1Core.doFinal(PBES1Core.java:416)
at com.sun.crypto.provider.PBEWithMD5AndDESCipher.engineDoFinal(PBEWithMD5AndDESCipher.java:316)
at javax.crypto.Cipher.doFinal(Cipher.java:2165)
at org.apache.ranger.plugin.util.PasswordUtils.decryptPassword(PasswordUtils.java:115)
at org.apache.ranger.services.knox.client.KnoxClient.getTopologyList(KnoxClient.java:79)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:406)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:402)
at org.apache.ranger.services.knox.client.KnoxClient.timedTask(KnoxClient.java:431)
at org.apache.ranger.services.knox.client.KnoxClient.getKnoxResources(KnoxClient.java:410)
at org.apache.ranger.services.knox.client.KnoxClient.connectionTest(KnoxClient.java:315)
at org.apache.ranger.services.knox.client.KnoxResourceMgr.validateConfig(KnoxResourceMgr.java:43)
at org.apache.ranger.services.knox.RangerServiceKnox.validateConfig(RangerServiceKnox.java:56)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:560)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:547)
at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:508)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-12 14:34:56,015 [timed-executor-pool-0] INFO apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:81) - Password decryption failed; trying knox connection with received password string
2018-03-12 14:34:57,918 [timed-executor-pool-0] ERROR apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:131) - Got invalid REST response from: https://localhost:8443/gateway/admin/api/v1/topologies, responseStatus: 500
gateway-audit.log
18/03/12 14:34:56 ||3e3df102-de59-4c19-9779-cdd4522181bb|audit|127.0.0.1|KNOX||||access|uri|/gateway/admin/api/v1/topologies|unavailable|Request method: GET
18/03/12 14:34:56 ||3e3df102-de59-4c19-9779-cdd4522181bb|audit|127.0.0.1|KNOX|admin|||authentication|uri|/gateway/admin/api/v1/topologies|success|
18/03/12 14:34:56 ||3e3df102-de59-4c19-9779-cdd4522181bb|audit|127.0.0.1|KNOX|admin|||authentication|uri|/gateway/admin/api/v1/topologies|success|Groups: []
18/03/12 14:34:57 ||3e3df102-de59-4c19-9779-cdd4522181bb|audit|127.0.0.1|KNOX|admin|||access|uri|/gateway/admin/api/v1/topologies|failure|
gateway.log: see the attached gateway-log.txt
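As a side check, calling the same admin API directly bypasses the Ranger lookup and shows whether the 500 comes from Knox itself; a sketch using the demo-LDAP credentials configured above (host and password are placeholders for your setup):
curl -iku admin:admin-password "https://localhost:8443/gateway/admin/api/v1/topologies"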
... View more
03-11-2018
10:51 PM
@Deepak Sharma Thanks for replying. Yes, when I create the Knox service, it automatically creates a policy that allows access for admin. But it still doesn't show in the Audit tab. I am attaching screenshots too. Also, this time I did a fresh installation of Ranger, the HDFS plugin, and then the Knox plugin. The logs look a little different from before.
gateway-audit.log
18/03/11 22:57:46 ||bf9148a8-fe74-4a1a-b18f-9a300f0a062c|audit|127.0.0.1|KNOX||||access|uri|/gateway/admin/api/v1/topologies|unavailable|Request method: GET
18/03/11 22:57:46 ||bf9148a8-fe74-4a1a-b18f-9a300f0a062c|audit|127.0.0.1|KNOX||||authentication|principal|admin|failure|LDAP authentication failed.
18/03/11 22:57:46 ||bf9148a8-fe74-4a1a-b18f-9a300f0a062c|audit|127.0.0.1|KNOX||||access|uri|/gateway/admin/api/v1/topologies|success|Response status: 401
gateway.log
2018-03-11 22:57:46,092 INFO hadoop.gateway (KnoxLdapRealm.java:getUserDn(691)) - Computed userDn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org using dnTemplate for principal: admin
2018-03-11 22:57:46,118 INFO hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(203)) - Could not login: org.apache.shiro.authc.UsernamePasswordToken - admin, rememberMe=false (127.0.0.1)
2018-03-11 22:57:46,121 ERROR hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(205)) - Shiro unable to login: javax.naming.AuthenticationException: [LDAP: error code 49 - INVALID_CREDENTIALS: Bind failed: ERR_229 Cannot authenticate user uid=admin,ou=people,dc=hadoop,dc=apache,dc=org]
ranger_admin.log
2018-03-11 22:57:45,291 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/ranger-knox-plugin-0.7.0.2.6.4.0-91.jar
2018-03-11 22:57:45,291 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/jackson-core-asl-1.9.13.jar
2018-03-11 22:57:45,292 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/commons-collections-3.2.2.jar
2018-03-11 22:57:45,293 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/jackson-mapper-asl-1.9.13.jar
2018-03-11 22:57:45,293 [http-bio-6080-exec-9] WARN org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:355) - getFilesInDirectory('ranger-plugins/knox'): adding /usr/hdp/2.6.4.0-91/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins/knox/commons-lang-2.6.jar
2018-03-11 22:57:46,142 [timed-executor-pool-0] ERROR apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:131) - Got invalid REST response from: https://localhost:8443/gateway/admin/api/v1/topologies, responseStatus: 401
I installed Ranger and the plugins following this link: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/ch_installing_ranger_chapter.html Please let me know if you need me to post any config files.
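Since gateway.log reports an LDAP bind failure for uid=admin, a direct bind test against the demo LDAP can confirm whether the password stored in the Ranger service config matches; the DN is taken from the log above and the port is assumed to be the Knox demo-LDAP default 33389:
ldapsearch -x -H ldap://localhost:33389 -D "uid=admin,ou=people,dc=hadoop,dc=apache,dc=org" -W -b "dc=hadoop,dc=apache,dc=org" "(uid=admin)" uid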
... View more
03-08-2018
11:52 PM
@Deepak Sharma I am using the same username and password as I use for the HDFS service.
... View more
03-08-2018
08:08 PM
1 Kudo
I am trying to enable the Ranger Knox plugin. I created a service called 'knoxdev' and the test connection is successful, but I am still not able to see the 'knoxdev' service in the Audit > Plugins tab.
- The Knox URL tested in the connection is: https://localhost:8443/gateway/admin/api/v1/topologies
- The authorization provider I am using in the admin topology is AclsAuthz. If I change it to XAsecurePDPKnox, I do not get a successful connection, so I need to know which provider should be used.
I have the HDFS plugin enabled too, with a service created for it called 'hadoopdev'; I am able to see 'hadoopdev' in the Audit > Plugins tab but not 'knoxdev'. I checked gateway.log and gateway-audit.log and can see response code 200. Then I checked ranger_admin.log, where I find this error:
ERROR org.apache.ranger.plugin.util.PasswordUtils (PasswordUtils.java:130) - Unable to decrypt password due to error
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 8 when decrypting with padded cipher
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:936)
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:847)
at com.sun.crypto.provider.PBES1Core.doFinal(PBES1Core.java:416)
at com.sun.crypto.provider.PBEWithMD5AndDESCipher.engineDoFinal(PBEWithMD5AndDESCipher.java:316)
at javax.crypto.Cipher.doFinal(Cipher.java:2165)
at org.apache.ranger.plugin.util.PasswordUtils.decryptPassword(PasswordUtils.java:115)
at org.apache.ranger.services.knox.client.KnoxClient.getTopologyList(KnoxClient.java:79)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:406)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:402)
at org.apache.ranger.services.knox.client.KnoxClient.timedTask(KnoxClient.java:431)
at org.apache.ranger.services.knox.client.KnoxClient.getKnoxResources(KnoxClient.java:410)
at org.apache.ranger.services.knox.client.KnoxClient.connectionTest(KnoxClient.java:315)
at org.apache.ranger.services.knox.client.KnoxResourceMgr.validateConfig(KnoxResourceMgr.java:43)
at org.apache.ranger.services.knox.RangerServiceKnox.validateConfig(RangerServiceKnox.java:56)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:560)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:547)
at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:508)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-08 18:23:16,315 [timed-executor-pool-0] INFO apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:81) - Password decryption failed; trying knox connection with received password string
Can anyone help?
... View more
Labels:
03-07-2018
07:10 PM
@Deepak Sharma Yes, the cluster is not Kerberos-secured. I have validated the username and corrected the password I was entering while creating the service. Now I don't see any error in gateway.log, but gateway-audit.log and ranger-admin.log still show the same error. Can you suggest anything?
... View more
03-06-2018
10:08 PM
I am trying to create a service in the HDFS plugin, referring to this link. I have tried the curl command below:
curl -H "Content-Type: application/json" -X POST -d '{"configs": {"password": "*****","username": "admin"},"description":"hdfsservice","isEnabled": true,"name": "hadoopdev","type": "test","version": 1}' http://localhost:6080/service/public/v2/api/service
When I run it, it gives no response at all, not even an error. Can someone help me confirm whether this curl is correct?
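For comparison, here is a hedged variant of the same call that only adds Ranger admin credentials and verbose output, so an empty body at least comes with an HTTP status code (admin:admin is a placeholder for your Ranger admin login):
curl -iv -u admin:admin -H "Content-Type: application/json" -X POST -d '{"configs": {"password": "*****","username": "admin"},"description":"hdfsservice","isEnabled": true,"name": "hadoopdev","type": "test","version": 1}' http://localhost:6080/service/public/v2/api/service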
... View more
Labels:
03-05-2018
08:18 PM
I am doing a Ranger command-line installation (non-Ambari). I need to set the admin username and password to something other than the defaults. Which config file needs to be modified, and what are the property names for these?
... View more
Labels:
02-28-2018
07:27 PM
I am setting up Ranger authorization on my Hadoop cluster (not Kerberos-secured). I already have Knox running. There is a prerequisite for Ranger to configure LDAP/AD group-level authorization. Please help me understand why it is important and needed.
... View more
Labels:
02-27-2018
06:35 PM
Thanks @spolavarapu. This worked for me.
... View more
02-26-2018
12:47 AM
This is the Knox URL I am giving while creating the service in the Knox plugin: https://localhost:8443/gateway/admin/api/v1/topologies I have also added group info in topologies/admin.xml as suggested here (https://community.hortonworks.com/articles/38348/ranger-is-not-allowing-access-to-knox-resources-wh.html):
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.groupIdAttribute</name>
<value>cn</value>
</param>
Below are the log details:
ranger-admin.log
2018-02-26 00:34:47,535 [timed-executor-pool-0] INFO apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:81) - Password decryption failed; trying knox connection with received password string
2018-02-26 00:34:47,632 [timed-executor-pool-0] ERROR apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:131) - Got invalid REST response from: https://localhost:8443/gateway/admin/api/v1/topologies, responseStatus: 403
gateway.log
2018-02-26 00:34:47,614 INFO hadoop.gateway (KnoxLdapRealm.java:getUserDn(691)) - Computed userDn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org using dnTemplate for principal: admin
2018-02-26 00:34:47,630 ERROR hadoop.gateway (KnoxLdapRealm.java:getRoles(246)) - Failed to get system ldap connection: javax.naming.AuthenticationException: [LDAP: error code 49 - INVALID_CREDENTIALS: Bind failed: ERR_229 Cannot authenticate user ]
gateway-audit.log
18/02/26 00:24:44 ||65cc6da4-9fa6-4e6d-8b69-b99f5d9acacb|audit|127.0.0.1|KNOX|admin|||authentication|uri|/gateway/admin/api/v1/topologies|success|Groups: []
18/02/26 00:24:44 ||65cc6da4-9fa6-4e6d-8b69-b99f5d9acacb|audit|127.0.0.1|KNOX|admin|||access|uri|/gateway/admin/api/v1/topologies|success|Response status: 403
... View more
Labels:
02-23-2018
05:23 PM
@spolavarapu Thanks for the clarification. Can you please tell me how to disable the default incremental sync? I am doing a manual installation (not with Ambari) and am not sure which property I need to set to disable it.
... View more
02-21-2018
08:51 PM
I know Ranger can sync users and groups from LDAP/AD or Unix, but can it sync them from a SQL database?
... View more
Labels:
02-21-2018
08:42 PM
I want to integrate the Ranger usersync service with a highly available LDAP. Is this possible?
... View more
Labels:
02-20-2018
05:30 AM
1. admin-install.txt
2. usersync-install.txt
3. ldapsearch -x -b "dc=hadoop,dc=apache,dc=org"
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
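Without -H, ldapsearch tries the default ldap://localhost:389, so the error above does not necessarily mean the LDAP server is down; pointing it at the URL and port from the usersync configuration (33389 in the earlier usersync log, an assumption for your setup) is a quick reachability check:
ldapsearch -x -H ldap://localhost:33389 -b "dc=hadoop,dc=apache,dc=org"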
... View more
02-20-2018
01:59 AM
I am configuring LDAP in the usersync install.properties file, attached here: install.txt. My user LDIF file is attached here: users.txt. I do not see any errors in the usersync logs:
20 Feb 2018 01:34:38 INFO UnixAuthenticationService [main] - Starting User Sync Service!
20 Feb 2018 01:34:38 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
20 Feb 2018 01:34:38 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
20 Feb 2018 01:34:38 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder created
20 Feb 2018 01:34:38 INFO UserGroupSyncConfig [UnixUserSyncThread] - Sleep Time Between Cycle can not be lower than [3600000] millisec. resetting to min value.
20 Feb 2018 01:34:38 INFO UserGroupSync [UnixUserSyncThread] - initializing sink: org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder
20 Feb 2018 01:34:39 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
20 Feb 2018 01:34:39 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder created
20 Feb 2018 01:34:39 INFO UserGroupSync [UnixUserSyncThread] - initializing source: org.apache.ranger.ldapusersync.process.LdapDeltaUserGroupBuilder
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder initialization started
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder initialization completed with -- ldapUrl: ldap://localhost:33389, ldapBindDn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=hadoop,dc=apache,dc=org, userSearchBase: [ou=people,dc=hadoop,dc=apache,dc=org], userSearchScope: 2, userObjectClass: person, userSearchFilter: (uid=*), extendedUserSearchFilter: null, userNameAttribute: uid, userSearchAttributes: [uid, uSNChanged, modifytimestamp], userGroupNameAttributeSet: null, pagedResultsEnabled: true, pagedResultsSize: 500, groupSearchEnabled: true, groupSearchBase: [ou=groups,dc=hadoop,dc=apache,dc=org], groupSearchScope: 2, groupObjectClass: groupofnames, groupSearchFilter: (cn=*), extendedGroupSearchFilter: (&null(|(member={0})(member={1}))), extendedAllGroupsSearchFilter: null, groupMemberAttributeName: member, groupNameAttribute: cn, groupSearchAttributes: [uSNChanged, member, cn, modifytimestamp], groupUserMapSyncEnabled: true, groupSearchFirstEnabled: false, userSearchEnabled: false, ldapReferral: ignore
20 Feb 2018 01:34:39 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder updateSink started
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - Performing user search first
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - extendedUserSearchFilter = (&(objectclass=person)(|(uSNChanged>=0)(modifyTimestamp>=19700101120000Z))(uid=*))
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder.getUsers() completed with user count: 0
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - extendedAllGroupsSearchFilter = (&(objectclass=groupofnames)(cn=*)(|(uSNChanged>=0)(modifyTimestamp>=19700101120000Z)))
20 Feb 2018 01:34:39 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder.getGroups() completed with group count: 0
20 Feb 2018 01:34:39 INFO UserGroupSync [UnixUserSyncThread] - End: initial load of user/group from source==>sink
20 Feb 2018 01:34:39 INFO UserGroupSync [UnixUserSyncThread] - Done initializing user/group source and sink
20 Feb 2018 01:34:43 INFO UnixAuthenticationService [main] - Enabling Unix Auth Service!
20 Feb 2018 01:34:43 INFO UnixAuthenticationService [main] - Enabling Protocol: [SSLv2Hello]
20 Feb 2018 01:34:43 INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1]
20 Feb 2018 01:34:43 INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1.1]
20 Feb 2018 01:34:43 INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1.2]
I have configured ldap as the sync_source in install.properties (config file attached), but still no users or groups are syncing in the Ranger UI. Please help!
... View more
Labels:
12-12-2017
11:19 PM
I am trying to write a Java program using the Knox shell classes. I am interested to know how SSO can work for Hadoop sessions, or how tokens can be passed instead of credentials in a Hadoop session.
... View more
Labels:
12-07-2017
10:17 PM
I am trying to understand the complete HDFS file-read workflow over HTTP when using WebHDFS. When an HTTP client requests to read a file, the request goes to the namenode. The namenode responds to the client with a datanode address (the block location) and a block access token for the client's authentication. This namenode response is a redirect, and the client follows it to send read requests for the required blocks to the datanodes. Now my question is: how does the data actually stream from the datanode to the client?
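For WebHDFS specifically, the two hops are easy to see with curl; a sketch assuming an unsecured cluster and the default namenode HTTP port 50070:
# Step 1: the namenode answers with a 307 redirect whose Location header points
# at a datanode (plus offset/length and, on secure clusters, the block access token)
curl -i "http://<namenode-host>:50070/webhdfs/v1/tmp/test.txt?op=OPEN"
# Step 2: following the redirect, the datanode's web server streams the block
# bytes back in the body of this second HTTP response
curl -i -L "http://<namenode-host>:50070/webhdfs/v1/tmp/test.txt?op=OPEN"
So the byte stream goes directly from the datanode's HTTP server to the client in the second response; the namenode only hands out locations and tokens and never proxies the data.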
... View more
Labels:
12-01-2017
06:46 PM
I am installing Ranger through Ambari, but it fails during "Install, Start and Test" as shown in the attached picture. Below is the log for the same:
stdout:
2017-12-01 12:46:53,991 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "update x_db_version_h set active='Y' where version='J10007' and active='N' and updated_by='mastervm';"
2017-12-01 12:46:55,100 [I] java patch PatchForHiveServiceDefUpdate_J10007 is applied..
2017-12-01 12:46:55,100 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10008' and active = 'Y';"
2017-12-01 12:46:56,163 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10008' and active = 'N';"
2017-12-01 12:46:57,237 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "insert into x_db_version_h (version, inst_at, inst_by, updated_at, updated_by,active) values ('J10008', now(), 'Ranger 0.7.0.2.6.2.0-205', now(), 'mastervm','N') ;"
2017-12-01 12:46:58,431 [I] java patch PatchForTagServiceDefUpdate_J10008 is being applied..
2017-12-01 12:47:33,831 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "update x_db_version_h set active='Y' where version='J10008' and active='N' and updated_by='mastervm';"
2017-12-01 12:47:34,820 [I] java patch PatchForTagServiceDefUpdate_J10008 is applied..
2017-12-01 12:47:34,821 [JISQL] /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://mastervm/ranger -u 'rangeradmin' -p '********' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'J10009' and active = 'Y';"
Command failed after 1 tries
stderr:
Python script has been killed due to timeout after waiting 600 secs
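Since the script stalls on the J10009 check, running that last query by hand against the Ranger database (connection details taken from the log above, password prompted) can show whether MySQL itself is hanging or the Java patch is just slow:
mysql -h mastervm -u rangeradmin -p ranger -e "select version from x_db_version_h where version = 'J10009';"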
... View more
Labels:
11-29-2017
09:38 PM
@Kit Menke Could you please explain why we can't use the FileSystem API? I went through the link you provided above (More info here) but didn't quite understand. I am specifically looking to use a Java API for Knox instead of an HTTP client.
... View more
11-05-2017
09:34 PM
@Pravin Bhagade Thanks for answering! So does your answer imply that I have to provide credentials every time I request access to WebHDFS using curl commands? Can I set up Kerberos authentication with the demo LDAP to avoid passing credentials with each request?
... View more