Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 12-25-2018 10:42 PM |
| | 14193 | 10-09-2018 03:52 AM |
| | 4764 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2914 | 06-21-2017 12:06 AM |
05-23-2016
06:54 AM
Terribly sorry, I hadn't seen that comment. In that case, the DN couldn't detect your disk as failed.
05-23-2016
06:49 AM
Can you check your setting for dfs.datanode.failed.volumes.tolerated in hdfs-site.xml? The DN will shut itself down if the number of detected failed disks is larger than this setting. If you want the DN to stop on the first failed disk, be sure to set this property to zero. Now, if it's already zero, something else could be wrong.
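For reference, a minimal hdfs-site.xml sketch of that setting (zero is the value that makes the DN stop on the first failed volume):

```xml
<!-- hdfs-site.xml: shut the DataNode down as soon as a single volume fails -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
</property>
```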
05-21-2016
11:20 AM
I also tried setting authentication.ldap.pagination.enabled=false, but to no avail. BTW, the LDAP is on AD.
05-21-2016
08:24 AM
1 Kudo
"ambari-server sync-ldap --users file" against an LDAP server with more than 10,000 users fails saying one of the users in the file is not there. When I do ldapsearch from the command line without filter, that user is not returned, because I guess LDAP server returns max of 2000 entities. When I do ldapsearch with a filter I can find him. How can I tell Ambari to do such search using a filter? ldapsearch returns distinguishedName: CN=user123456,OU=users,DC=example,DC=com For ldapsearch I provide "(CN=user123456)" as my filter. In setup-ldap I do like below, but it doesn't work. Any ideas. authentication.ldap.baseDn="OU=users,DC=example,DC=com"
authentication.ldap.usernameAttribute=CN
authentication.ldap.dnAttribute=distinguishedName
authentication.ldap.userObjectClass=organizationalPerson ... have 4 classes listed: top,person,organizationlPerson, user; also tried user
authentication.ldap.referral=ignore ... also tried follow When I try to sync with one of the users returned using ldapserach without filter it works.
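For reference, the filtered query described above would look something like this (the server host and bind credentials are placeholders, not from the actual setup):

```
ldapsearch -H ldap://ad.example.com -x -D "binduser@example.com" -W \
  -b "OU=users,DC=example,DC=com" "(CN=user123456)" distinguishedName
```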
Labels:
- Apache Ambari
05-21-2016
01:30 AM
Knox supports such a use case by means of multiple topology files. However, Ambari supports only a single topology file, for the cluster managed by the Ambari instance in which Knox is running as a service. For your requirements it's best to install Knox stand-alone and configure it manually. It's an easy operation; you can find the details here.
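As a rough sketch, each stand-alone topology is just an XML file dropped into {GATEWAY_HOME}/conf/topologies, and the file name becomes the topology name in the gateway URL. The file name and hosts below are placeholders, and the ShiroProvider params are trimmed for brevity:

```xml
<!-- conf/topologies/other-cluster.xml: a topology pointing at a cluster
     not managed by this Ambari instance (hosts are placeholders) -->
<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <!-- LDAP params omitted for brevity -->
    </provider>
  </gateway>
  <service>
    <role>WEBHDFS</role>
    <url>http://other-nn.example.com:50070/webhdfs</url>
  </service>
</topology>
```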
05-21-2016
01:04 AM
1 Kudo
In Knox, you can create two or more topology files and specify a different LDAP server in each of them. End users select which LDAP server to use by specifying one of those file names in the Knox URL. Specifying two or more LDAP (or any other) authentication providers in the same topology file is not supported; for more details see here. Ranger also supports only one LDAP provider. For the initial user-sync you can sync with one LDAP server, then change the settings and sync with the other. However, for subsequent user-syncs Ranger will use only the single LDAP server currently set.
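To illustrate the URL part (topology names and hosts are placeholders): if you deploy two topology files, say ldap-a.xml and ldap-b.xml, each with its ShiroProvider param main.ldapRealm.contextFactory.url pointing at a different LDAP server, end users pick the LDAP server simply by choosing the topology segment of the gateway URL:

```
https://knox.example.com:8443/gateway/ldap-a/webhdfs/v1/tmp?op=LISTSTATUS
https://knox.example.com:8443/gateway/ldap-b/webhdfs/v1/tmp?op=LISTSTATUS
```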
05-19-2016
04:10 AM
@mark doutre I've just found a new blog post covering your use case of storing schema-less Avro objects in HBase. It's implemented by direct interaction with HBase, without Hive. The code appears to be simple. HTH
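In the same spirit, here's a minimal Java sketch of the direct-to-HBase idea (the table, column family, and qualifier names are my own placeholders, not from the blog post): the whole serialized Avro record goes into a single cell as opaque bytes, so HBase never needs to know the schema.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AvroToHBase {
    // Store an already-serialized Avro record as an opaque byte[] in one cell
    public static void store(String rowKey, byte[] avroBytes) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("avro_objects"))) {
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), avroBytes);
            table.put(put);
        }
    }
}
```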
05-19-2016
04:04 AM
Hi @Stephen Redmond, sorry I missed your comment. No, I haven't done any tests with compression. I'll let you know if I find something. Also, you can file a question on HCC, copying your comment, to get wider attention. Tnx.
05-19-2016
03:24 AM
2 Kudos
Hi @darkz yu, you need to perform the so-called Takeover by Ambari; check this for your options. The easiest way, provided you don't have a lot of data, is a variant of option 1 from that post: export all important data from your current cluster, set up a new cluster using Ambari, and then import your data into the new cluster. Note that you will need to export/import HDFS files, Hive tables, and HBase tables separately. If you want to do it keeping your data "in place", you can consider options 2 and 3: takeover using the Ambari REST API, or by using a dummy cluster. Both are more complicated than option 1.
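To give a flavor of the option-1 export/import steps (paths, table names, and NameNode addresses are placeholders):

```
# HDFS files: copy between the two clusters
hadoop distcp hdfs://old-nn:8020/data hdfs://new-nn:8020/data

# Hive tables: EXPORT on the old cluster, then IMPORT on the new one
hive -e "EXPORT TABLE mydb.mytable TO '/tmp/mytable_export';"

# HBase tables: dump to HDFS, then restore with ...mapreduce.Import on the new cluster
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /tmp/mytable_export
```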
05-19-2016
12:44 AM
Hi @mike pal, have you resolved this? As Artem suggested, you can put it in your workflow lib directory. You can also put it in the Oozie Hive share-lib directory, or in your home directory in HDFS and declare its location using the <job-xml> tag in your workflow.
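For the <job-xml> option, a sketch of the Hive action (the HDFS path and names are placeholders):

```xml
<!-- workflow.xml fragment: point the Hive action at a hive-site.xml kept in HDFS -->
<action name="hive-node">
  <hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <job-xml>/user/mike/hive-site.xml</job-xml>
    <script>script.q</script>
  </hive>
  <ok to="end"/>
  <error to="fail"/>
</action>
```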