Member since: 08-29-2018
Posts: 27
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6486 | 09-19-2019 01:32 AM
11-16-2021
05:03 AM
Hello. I'm trying to add users to Ranger via the REST API, but I can only add a single user at a time. This is the command I'm using with a JSON file:

```
curl -u admin:$PASSWORD -i -X POST -H "Accept: application/json" -H "Content-Type: application/json" https://$RANGER_URL:6182/service/xusers/secure/users -d @users_RESTAPI.json -vvv
```

And the JSON file contains the following:

```
{ "name": "user_1", "firstName": "", "lastName": "", "loginId": "user_1", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 },
{ "name": "user_2", "firstName": "", "lastName": "", "loginId": "user_1", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 }
```

Only the first user is added; the following entries are ignored. Do the users need to be added one by one via the REST API? Thanks
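The behavior described above (only the first JSON object is honored) suggests the `/service/xusers/secure/users` endpoint takes one user object per POST. A minimal sketch of a loop that submits users one at a time; the host, credentials, and the `build_user`/`post_user` helpers are illustrative placeholders, not part of the Ranger API:

```python
import base64
import json
import urllib.request

RANGER_URL = "https://ranger.example.com:6182"  # placeholder host


def build_user(name, password, group_ids=(3,)):
    # One user payload per request, mirroring the fields from the JSON file above.
    return {
        "name": name, "firstName": "", "lastName": "",
        "loginId": name, "emailAddress": "", "description": "",
        "password": password, "groupIdList": list(group_ids),
        "status": 1, "isVisible": 1,
        "userRoleList": ["ROLE_USER"], "userSource": 0,
    }


def post_user(user, auth=("admin", "secret")):
    # One POST per user to /service/xusers/secure/users (basic auth).
    token = base64.b64encode(f"{auth[0]}:{auth[1]}".encode()).decode()
    req = urllib.request.Request(
        f"{RANGER_URL}/service/xusers/secure/users",
        data=json.dumps(user).encode(),
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    return urllib.request.urlopen(req)


# Usage (commented out so the sketch stays side-effect free):
# for name in ["user_1", "user_2"]:
#     post_user(build_user(name, "pass123"))
```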
Labels:
- Apache Ranger
09-19-2019
01:36 AM
Sorry, I forgot to add the port. The correct value will be hdfs://nameservice:8020/ranger/audit
09-19-2019
01:32 AM
Hi, this is what I did: in Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the DFS destination dir (if you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that has the nameservice property, so the audit logs always hit the active NameNode). For example, if you have defined fs.defaultFS=nameservice, you will add something like xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit. Then restart the brokers. Hope it helps.
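For reference, the hdfs-site.xml placed on each broker needs the client-side HA properties so the HDFS client can resolve the nameservice itself. A sketch, assuming a nameservice called "nameservice" and placeholder NameNode hostnames:

```xml
<!-- Sketch: client-side NameNode HA config. "nameservice", nn1/nn2, and
     the namenode*.example.com hosts are placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.nameservice</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nameservice.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nameservice.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.nameservice</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```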
09-19-2019
01:18 AM
1 Kudo
Actually I didn't share it because I didn't get the notification about this message. Of course I will do it. Best, Paula
04-09-2019
07:22 AM
I followed the steps in this link https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf/content/install-ambari.html
04-02-2019
12:41 PM
Hello, I have a scenario with a Hadoop cluster installed with HDP 2.6.5 and a Kafka cluster installed with HDF 3.3.0 with the Ranger service configured. I want to store the Ranger audit logs in HDFS, so I set the Kafka property xasecure.audit.destination.hdfs.dir to point to the HDFS directory.

Case one: when using the NameNode in the URI, the logs are stored in HDFS successfully (xasecure.audit.destination.hdfs.dir=hdfs://<namenode_FQDN>:8020/ranger/audit).

Case two: using a HAProxy, since I have NameNode HA enabled and want to always point to the active NN, I get the following error:

```
2019-04-02 12:00:13,841 ERROR [kafka.async.summary.multi_dest.batch_kafka.async.summary.multi_dest.batch.hdfs_destWriter] org.apache.ranger.audit.provider.BaseAuditHandler (BaseAuditHandler.java:329) - Error writing to log file.
java.io.IOException: DestHost:destPort <ha_proxy_hostname>:8085 , LocalHost:localPort <kafka_broker_hostname>/10.212.164.50:0. Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
```

Is there any extra config to be set? Thanks
Labels:
- Apache Hadoop
- Apache Kafka
- Apache Ranger
06-14-2016
02:17 PM
I face the same problem: the alert with dead nodes doesn't clear. The nodes were decommissioned and removed from the cluster, the Ambari agent is not running, and I also ran the refreshNodes command. Which services may require a restart? Thanks
12-21-2015
08:13 AM
Thanks Neeraj. After looking carefully at the blueprint, I found some wrong mappings between host groups and the HA configuration. Best, Paula