Member since: 08-29-2018
27 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2794 | 09-19-2019 01:32 AM |
11-16-2021 05:03 AM

Hello. I'm trying to add users to Ranger via the REST API, but I can only add a single user at a time. This is the command I'm using with a JSON file:

```
curl -u admin:$PASSWORD -i -X POST -H "Accept: application/json" -H "Content-Type: application/json" https://$RANGER_URL:6182/service/xusers/secure/users -d @users_RESTAPI.json -vvv
```

And the JSON file contains the following:

```json
{ "name": "user_1", "firstName": "", "lastName": "", "loginId": "user_1", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 },
{ "name": "user_2", "firstName": "", "lastName": "", "loginId": "user_2", "emailAddress": "", "description": "", "password": "pass123", "groupIdList": [3], "status": 1, "isVisible": 1, "userRoleList": ["ROLE_USER"], "userSource": 0 }
```

Only the first user is added; the following entries are ignored. Do the users need to be added one by one via the REST API? Thanks
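This behavior matches the endpoint appearing to accept one user object per request (and the file above, with two comma-separated top-level objects, is not valid JSON, so a lenient parser likely reads only the first object). A workaround sketch, assuming the user objects are wrapped in a proper JSON array in a file called users.json (hypothetical name) and that jq is available:

```bash
#!/usr/bin/env bash
# POST each element of the JSON array to Ranger, one request per user.
# users.json: [ { "name": "user_1", ... }, { "name": "user_2", ... } ]
jq -c '.[]' users.json | while read -r user; do
  curl -u admin:"$PASSWORD" -X POST \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -d "$user" \
    "https://$RANGER_URL:6182/service/xusers/secure/users"
done
```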
Labels:
- Apache Ranger
04-29-2020 12:03 AM
Can anyone give an update on this? Thank you
04-27-2020 04:49 AM

Hello, I was trying to access the repo to upgrade the HDF version, but it requires authentication credentials. How can I request these credentials? https://archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.0.0 Thanks Paula
Labels:
- Cloudera DataFlow (CDF)
01-15-2020 11:54 PM

Thanks for your answer, but I was looking into configuring Cache-Control. I see that in your example you still have Cache-Control=no-cache, and I wanted to add more settings to this header. Paula
01-15-2020 05:48 AM
Hello,
I wanted to configure the Cache-Control header in Knox, but I can't find any setting to do so.
I want to define
Cache-Control: no-cache, no-store, must-revalidate
instead of
Cache-Control: no-cache
Is it possible to have this setting configured?
Thanks
Tags:
- Cache-Control
- Knox

Labels:
- Apache Knox
11-08-2019 12:16 AM

Hi, I'm not sure what you mean. I already have a HUE instance running with LDAP backend authentication enabled, but I don't want to have this entry in hue.ini:

```
# Password of the bind user -- not necessary if the LDAP server supports
# anonymous searches
bind_password=PASSWORDINPLAINTEXT
```

I wanted to have it encrypted. I know there's a way to keep the passwords in an external file, like the link I posted in the original question, but they will still be in plaintext.
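One pattern that avoids the plaintext entry, building on the script approach from the linked article, is the script variant of the setting: hue.ini supports a bind_password_script option whose stdout Hue uses as the password, and that script can decrypt an encrypted file instead of echoing a literal. A minimal sketch (the paths, key file, and cipher are all assumptions, not Hue defaults):

```
# hue.ini -- point at a script instead of a literal password
bind_password_script=/etc/hue/conf/get_bind_password.sh
```

```bash
#!/usr/bin/env bash
# /etc/hue/conf/get_bind_password.sh (hypothetical path)
# Decrypts the stored LDAP bind password and prints it to stdout for Hue.
openssl enc -aes-256-cbc -d -salt \
  -in /etc/hue/conf/bind_password.enc \
  -pass file:/etc/hue/conf/.keyfile
```

The secret then only needs filesystem protection on the key file rather than living in hue.ini.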
11-05-2019 01:36 AM
Hi,
I have a standalone (i.e. not configured with CM) HUE instance running, connected to my HDP cluster.
The passwords in the hue.ini conf file are all in plaintext (database and LDAP passwords).
Does HUE provide a way to have those passwords encrypted?
I know you can store all the passwords in a separate file as described in http://gethue.com/storing-passwords-in-script-rather-than-hue-ini-files/, but they will still be in plaintext.
Thanks
Paula
Labels:
- Cloudera Hue
09-20-2019 02:51 AM

There was an issue with one of the Kafka brokers that was keeping some topics from being read. The wildcards are working fine in Ranger.
09-19-2019 01:36 AM

Sorry, I forgot to add the port; the correct way will be hdfs://nameservice:8020/ranger/audit
09-19-2019 01:32 AM

Hi, this is what I did: in Ambari, select Kafka → Configs → Advanced ranger-kafka-audit and add the DFS destination dir. (If you have NameNode HA, you need to add to each Kafka broker the hdfs-site.xml that has the nameservice property, so the audit logs always hit the active NameNode.) For example, if you have defined fs.defaultFS=nameservice, you will add something like xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit. Then restart the brokers. Hope it helps.
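Concretely, the entries in Advanced ranger-kafka-audit would look roughly like this (a sketch assuming the nameservice is literally named nameservice; verify the exact keys against your Ranger version):

```
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://nameservice/ranger/audit
```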
09-19-2019 01:18 AM
1 Kudo

Actually, I didn't share it because I didn't get the notification about this message. I will, of course, do it. Best Paula
09-19-2019 12:56 AM
Hello,
I'm trying to create some ACL rules in Ranger for Kafka topics.
Since the user wants to access several topics that start with the same name, I tried to add topic_name* in the topic field, but it doesn't work; it only works if I put the full topic name.
I tried to look in the documentation and it seems that Ranger accepts wildcards, but I couldn't find any example. Can someone advise on this?
Thank you
Paula
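For reference, a wildcard topic resource can also be set when creating the policy through Ranger's public REST API. A sketch (the service name cluster_kafka and user some_user are hypothetical; "consume"/"publish" are access types from the Kafka service definition):

```bash
# Create a Ranger policy matching every topic that starts with "topic_name".
curl -u admin:"$PASSWORD" -X POST \
  -H "Content-Type: application/json" \
  "https://$RANGER_URL:6182/service/public/v2/api/policy" \
  -d '{
    "service": "cluster_kafka",
    "name": "topic_name_prefix_access",
    "resources": { "topic": { "values": ["topic_name*"], "isExcludes": false, "isRecursive": false } },
    "policyItems": [ {
      "users": ["some_user"],
      "accesses": [ { "type": "consume", "isAllowed": true },
                    { "type": "publish", "isAllowed": true } ]
    } ]
  }'
```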
Labels:
- Apache Kafka
- Apache Ranger
04-09-2019 07:22 AM
I followed the steps in this link https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf/content/install-ambari.html
04-02-2019 12:41 PM

Hello, I have a scenario with a Hadoop cluster installed with HDP 2.6.5 and a Kafka cluster installed with HDF 3.3.0 with the Ranger service configured. I want to store the Ranger audit logs in HDFS, so I set the Kafka property xasecure.audit.destination.hdfs.dir pointing to the HDFS directory.

Case one: when using the NameNode in the URI, the logs are stored in HDFS successfully (xasecure.audit.destination.hdfs.dir=hdfs://<namenode_FQDN>:8020/ranger/audit).

Case two: using an haproxy, since I have NameNode HA enabled and want to always point to the active NN, I get the following error:

```
2019-04-02 12:00:13,841 ERROR [kafka.async.summary.multi_dest.batch_kafka.async.summary.multi_dest.batch.hdfs_destWriter] org.apache.ranger.audit.provider.BaseAuditHandler (BaseAuditHandler.java:329) - Error writing to log file.
java.io.IOException: DestHost:destPort <ha_proxy_hostname>:8085 , LocalHost:localPort <kafka_broker_hostname>/10.212.164.50:0. Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
```

Is there any extra config to be set? Thanks
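The fix the author later posted (09-19-2019, above) was to use the HDFS nameservice rather than a proxy. For reference, the client-side HA settings that the hdfs-site.xml on each Kafka broker would need look roughly like this (a sketch assuming a nameservice called nameservice with NameNodes nn1/nn2 on hypothetical hosts):

```xml
<property><name>dfs.nameservices</name><value>nameservice</value></property>
<property><name>dfs.ha.namenodes.nameservice</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.nameservice.nn1</name><value>namenode1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.nameservice.nn2</name><value>namenode2.example.com:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.nameservice</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```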
Labels:
- Apache Hadoop
- Apache Kafka
- Apache Ranger
01-14-2019 01:24 PM

Hi, thanks for your reply. But is this valid when Kerberos is configured with the option "KDC Type: Manage Kerberos principals and keytabs manually"? That documentation applies when Ambari is managing the KDC. When I configured the cluster manually with Kerberos, the generated CSV file doesn't contain the admin principal. Since Ambari is not responsible for creating the principals/keytabs, do we really need this when uploading the blueprint? Thanks Paula
01-10-2019 04:01 PM

Hello, I'm trying to automate the creation of a kerberized HDF cluster on OpenStack VMs, using blueprints. The principals are not managed by Ambari; I first configured the cluster manually and then downloaded the blueprint. When I try to create a cluster with that blueprint, I get the following error:

```
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://ambari-host:8080/api/v1/clusters/clusterName -d @hostmapping.json
{
  "status" : 400,
  "message" : "Topology validation failed: org.apache.ambari.server.topology.InvalidTopologyException: kdc.admin.credential is missing from request."
}
```

Do I need to add any extra param to the curl request?
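The usual way to satisfy that validation is to include the KDC admin credential in the cluster creation template itself rather than as a curl parameter. A sketch of the addition to hostmapping.json (the blueprint name, principal, and key are placeholders; alias and type follow Ambari's credential schema):

```json
{
  "blueprint": "my-blueprint",
  "credentials": [
    {
      "alias": "kdc.admin.credential",
      "principal": "admin/admin@EXAMPLE.COM",
      "key": "admin-password",
      "type": "TEMPORARY"
    }
  ],
  "host_groups": [ "..." ]
}
```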
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
10-04-2018 06:22 AM

Is there any way to do the migration of the database without hitting these errors?
08-30-2018 01:01 AM

Hello, I installed HUE 4.2 by compiling the code from git. It all works fine, but when I try to load the data from the old HUE (3.9.0) I get an error regarding a non-existing field. Before upgrading, I did the following steps:

1. Stop HUE
2. Back up the DB: `hue/build/env/bin/hue dumpdata > ./hue-mysql.json`
3. Drop the database
4. Create a new empty database: `CREATE DATABASE huedb DEFAULT CHARACTER SET utf8 DEFAULT COLLATE = utf8_bin;`
5. Synchronize the new database: `hue/build/env/bin/hue syncdb --noinput` and `hue/build/env/bin/hue migrate`
6. Load the data previously backed up: `hue/build/env/bin/hue loaddata ./hue-mysql.json`

I get this error:

```
File "/app/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/db/models/options.py", line 619, in get_field
    raise FieldDoesNotExist("%s has no field named '%s'" % (self.object_name, field_name))
django.core.serializers.base.DeserializationError: Problem installing fixture '/app/temp/hue-dump.json': ContentType has no field named 'name'
```

I compared the tables from the 3.9 and 4.2 versions and I see a difference in the fields of django_content_type.

3.9:

```
mysql> describe django_content_type;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int(11)      | NO   | PRI | NULL    | auto_increment |
| name      | varchar(100) | NO   |     | NULL    |                |
| app_label | varchar(100) | NO   | MUL | NULL    |                |
| model     | varchar(100) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+
```

4.2.0:

```
mysql> describe django_content_type;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int(11)      | NO   | PRI | NULL    | auto_increment |
| app_label | varchar(100) | NO   | MUL | NULL    |                |
| model     | varchar(100) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+
```

The name field is not present anymore. Is there any extra step that I should consider when migrating the database? I'm using a standalone MySQL DB (not using CDH). Thank you Paula
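A common workaround for this class of Django upgrade error (the ContentType name column was dropped in newer Django versions) is to strip the contenttypes fixtures from the dump before loading it, since migrate recreates that table anyway. A minimal sketch, assuming jq is available and the dump is a standard dumpdata JSON array:

```bash
# Remove contenttypes fixtures from the dumpdata output; loaddata then skips
# the django_content_type rows, which migrate has already rebuilt.
jq '[ .[] | select(.model != "contenttypes.contenttype") ]' hue-mysql.json > hue-mysql-filtered.json
hue/build/env/bin/hue loaddata ./hue-mysql-filtered.json
```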
Labels:
- Hue
06-14-2016 02:17 PM

I face the same problem: the alert with dead nodes doesn't clear. The nodes were decommissioned and removed from the cluster, the Ambari agent is not running, and I also ran the refreshNodes command. Which services may require a restart? Thanks
12-21-2015 08:13 AM

Thanks Neeraj. After looking carefully at the blueprint, I had some wrong mappings between host groups and the HA configuration. Best Paula
12-17-2015 10:05 AM
2 Kudos

I've implemented a way to spawn a cluster in an automated fashion (launching VMs, installing all the required RPMs, and configuring the cluster, using Ansible). The final part of configuring the cluster is done via blueprint upload through the API. I tried to apply a blueprint with NameNode and RM HA enabled, but after all the services are started, I have some alerts and not all of the processes come up (e.g. NameNode, RM). So my question is whether blueprints with HA enabled can't be uploaded. Or, if it is possible, do we need to do some extra steps prior to uploading the blueprint? Thanks Paula
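As the follow-up above (12-21-2015) notes, the eventual problem was the mapping between host groups and the HA configuration. For reference, a rough sketch of the host-group side of an HA blueprint (group names and component placement are illustrative only; the host mapping file must reference these exact group names):

```json
{
  "host_groups": [
    {
      "name": "master_1",
      "components": [ { "name": "NAMENODE" }, { "name": "ZKFC" },
                      { "name": "JOURNALNODE" }, { "name": "RESOURCEMANAGER" } ]
    },
    {
      "name": "master_2",
      "components": [ { "name": "NAMENODE" }, { "name": "ZKFC" },
                      { "name": "JOURNALNODE" }, { "name": "RESOURCEMANAGER" } ]
    }
  ]
}
```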
Labels:
- Apache Ambari