Member since: 08-29-2018
Posts: 27
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3863 | 09-19-2019 01:32 AM
11-16-2021
05:03 AM
Hello. I'm trying to add users to Ranger via the REST API, but I can only add a single user at a time. This is the command I'm using with a JSON file:

curl -u admin:$PASSWORD -i -X POST -H "Accept: application/json" -H "Content-Type: application/json" https://$RANGER_URL:6182/service/xusers/secure/users -d @users_RESTAPI.json -vvv

And the JSON file contains the following:

{ "name": "user_1", "firstName": "", "lastName": "", "loginId": "user_1", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 },
{ "name": "user_2", "firstName": "", "lastName": "", "loginId": "user_1", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 }

Only the first user is added; the following entries are ignored. Do the users need to be added one by one via the REST API? Thanks
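The `secure/users` endpoint takes a single user object per request body, which would explain why only the first entry is created. A minimal Python sketch of the client-side loop that sends one POST per user; the host name, credentials, and the assumption that the JSON file holds an *array* of user objects are all placeholders, not confirmed details:

```python
# Sketch only: host, credentials, and file layout are assumptions.
# The xusers endpoint creates ONE user per POST, so a "bulk" load
# is just a loop over a JSON array on the client side.
import base64
import json
import urllib.request

RANGER_URL = "https://ranger.example.com:6182"  # placeholder host


def build_request(user, base_url=RANGER_URL, auth=("admin", "pass123")):
    """Build the POST request for a single Ranger user record."""
    token = base64.b64encode(f"{auth[0]}:{auth[1]}".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/service/xusers/secure/users",
        data=json.dumps(user).encode(),
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )


def add_users(users):
    """POST each user one at a time; collect the HTTP status codes."""
    statuses = []
    for user in users:
        with urllib.request.urlopen(build_request(user)) as resp:
            statuses.append(resp.status)
    return statuses
```

With `users_RESTAPI.json` rewritten as a JSON array (`[ {...user_1...}, {...user_2...} ]`), `add_users(json.load(open("users_RESTAPI.json")))` would be equivalent to running the curl command once per user.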
Labels: Apache Ranger
11-08-2019
12:16 AM
Hi, I'm not sure what you mean. I already have a HUE instance running with LDAP backend authentication enabled, but I don't want to have this entry in hue.ini:

# Password of the bind user -- not necessary if the LDAP server supports
# anonymous searches
bind_password=PASSWORDINPLAINTEXT

I wanted to have it encrypted. I know there's a way to keep the passwords in an external file, as in the link I posted in the original question, but they will still be in plaintext.
11-05-2019
01:36 AM
Hi,
I have a standalone HUE instance (i.e., not managed by CM) running, connected to my HDP cluster.
The passwords in the hue.ini conf file are all in plaintext (database and LDAP passwords).
Does HUE provide a way to have those passwords encrypted?
I know you can store all the passwords in a separate file, as described in http://gethue.com/storing-passwords-in-script-rather-than-hue-ini-files/, but they will still be in plaintext.
Thanks
Paula
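Hue does not encrypt values inside hue.ini itself, but the script variants of the password properties (the mechanism the linked gethue post describes) take the secret from a command's stdout, so that command can decrypt at runtime instead of reading a plaintext file. A sketch, with the script path and decryption approach as assumptions:

```ini
; hue.ini -- fetch the LDAP bind password from a script (path hypothetical)
[desktop]
  [[ldap]]
    ; bind_password=...   <- remove the plaintext entry
    bind_password_script=/opt/hue/bin/ldap_pass.sh
```

Here `ldap_pass.sh` could, for example, decrypt an encrypted file (e.g. with `openssl enc -d`) and print the password to stdout, so nothing under Hue's config directory holds the secret in the clear.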
Labels: Cloudera Hue
09-19-2019
01:36 AM
Sorry, I forgot to add the port; the correct URI would be hdfs://nameservice:8020/ranger/audit
09-19-2019
01:32 AM
Hi, this is what I did: In Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the HDFS destination dir. (If you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that has the nameservice property, so the audit logs always hit the active NameNode.) For example, if you have defined fs.defaultFS=nameservice, you will add something like xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit. Then restart the brokers. Hope it helps.
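For reference, the client-side HA entries that the brokers' hdfs-site.xml needs in order to resolve the nameservice look roughly like this; the nameservice id, NameNode ids, and hostnames below are placeholders to be matched to the cluster's actual HA configuration:

```xml
<!-- Placeholder ids/hosts; copy the real values from the cluster's HA config -->
<property>
  <name>dfs.nameservices</name>
  <value>nameservice</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice.nn1</name>
  <value>nn1-host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice.nn2</name>
  <value>nn2-host.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.nameservice</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With these present, `hdfs://nameservice/...` URIs (no host:port) fail over to whichever NameNode is active.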
09-19-2019
01:18 AM
1 Kudo
Actually, I didn't share it because I didn't get the notification about this message. Of course I will do it. Best, Paula
04-09-2019
07:22 AM
I followed the steps in this link https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf/content/install-ambari.html
04-02-2019
12:41 PM
Hello, I have a scenario with a Hadoop cluster installed with HDP 2.6.5 and a Kafka cluster installed with HDF 3.3.0 with the Ranger service configured. I want to store the Ranger audit logs in HDFS, so I set the Kafka property xasecure.audit.destination.hdfs.dir to point to the HDFS directory.

Case one: when using the NameNode in the URI, the logs are stored in HDFS successfully (xasecure.audit.destination.hdfs.dir=hdfs://<namenode_FQDN>:8020/ranger/audit).

Case two: using a haproxy, since I have NameNode HA enabled and want to always point to the active NN, I get the following error:

2019-04-02 12:00:13,841 ERROR [kafka.async.summary.multi_dest.batch_kafka.async.summary.multi_dest.batch.hdfs_destWriter] org.apache.ranger.audit.provider.BaseAuditHandler (BaseAuditHandler.java:329) - Error writing to log file.
java.io.IOException: DestHost:destPort <ha_proxy_hostname>:8085 , LocalHost:localPort <kafka_broker_hostname>/10.212.164.50:0. Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Is there any extra config to be set? Thanks
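The error in case two is what the HDFS client reports when it speaks Hadoop RPC to an endpoint that answers with non-RPC bytes (for example a proxy's own response), tripping the "RPC response exceeds maximum data length" guard. A hedged alternative to proxying the NameNode port is client-side HA via the logical nameservice, which is the approach in the accepted solution for this thread:

```properties
# Use the HA nameservice (no host:port) in the audit URI; this requires the
# cluster's hdfs-site.xml with the nameservice definition on each broker
xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit
```

With this, the HDFS client itself fails over to the active NameNode, so no external load balancer is needed for the audit path.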
Labels: Apache Hadoop, Apache Kafka, Apache Ranger