Member since: 09-11-2015
Posts: 115
Kudos Received: 126
Solutions: 15
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3057 | 08-15-2016 05:48 PM |
| | 2873 | 05-31-2016 06:19 PM |
| | 2416 | 05-11-2016 03:10 PM |
| | 1869 | 05-10-2016 07:06 PM |
| | 4761 | 05-02-2016 06:25 PM |
11-12-2015
02:46 PM
1 Kudo
I figured something like HAProxy or nginx would work. Ideally I'm looking for an example config, or even better, an example from anyone who has extended Knox with a custom provider.
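For context, this is roughly the kind of reverse-proxy setup I had in mind, written as a minimal nginx sketch rather than a tested config; the hostnames, ports, and certificate paths are placeholders:

# Front the Ambari and Ranger UIs from the gateway host on dedicated external ports.
# Placeholder values throughout; adjust hostnames, ports, and cert paths.
server {
    listen 8440 ssl;                                      # external port for the Ambari UI
    server_name gateway.example.com;
    ssl_certificate     /etc/nginx/ssl/gateway.crt;
    ssl_certificate_key /etc/nginx/ssl/gateway.key;
    location / {
        proxy_pass http://ambari-host.example.com:8080;   # Ambari default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 8441 ssl;                                      # external port for the Ranger Admin UI
    server_name gateway.example.com;
    ssl_certificate     /etc/nginx/ssl/gateway.crt;
    ssl_certificate_key /etc/nginx/ssl/gateway.key;
    location / {
        proxy_pass http://ranger-host.example.com:6080;   # Ranger Admin default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Using a separate external port per UI avoids having to rewrite application paths, which some of these UIs do not handle well when served from a sub-path.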
11-11-2015
05:05 PM
1 Kudo
Knox 0.6.0 has built-in support for these 7 services: WebHDFS, WebHCat, Oozie, HBase, Hive, YARN, and Storm.

Is there a recommended approach to expose other services from the gateway host? Particularly web UIs, such as Ambari & Ranger.
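For reference, this is how one of the built-in services is declared in a Knox topology file, shown as a minimal sketch; the topology path and URL below are Sandbox-style placeholders, not something from this thread:

<!-- e.g. a service entry in a topology under /etc/knox/conf/topologies/ -->
<service>
    <role>WEBHDFS</role>
    <url>http://sandbox.hortonworks.com:50070/webhdfs</url>
</service>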
Labels:
- Apache Ambari
- Apache Knox
- Apache Ranger
11-11-2015
04:16 PM
3 Kudos
SYMPTOM: For a Capacity Scheduler queue that specifies some groups in its acl_submit_applications property, a user who is not a member of any of those groups is still able to submit jobs to the queue.
ROOT CAUSE: By default the root queue is allow-all, which results in all child queues defaulting to allow-all. The acl_submit_applications property is described as: "The ACL which controls who can submit applications to the given queue. If the given user/group has necessary ACLs on the given queue or one of the parent queues in the hierarchy they can submit applications. ACLs for this property are inherited from the parent queue if not specified."

SOLUTION: Set the root queue to deny-all by entering a single space as the value, then grant access per child queue in its own ACL. For example (note that the value of the first property is a single space, which is not visible below):

yarn.scheduler.capacity.root.acl_submit_applications=
yarn.scheduler.capacity.root.default.acl_administer_jobs=appdev
yarn.scheduler.capacity.root.default.acl_submit_applications=appdev
yarn.scheduler.capacity.root.system.acl_administer_jobs=dbadmin
yarn.scheduler.capacity.root.system.acl_submit_applications=dbadmin
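As a quick check (a hedged sketch, not part of the original solution): after saving the queue configuration, refresh the running scheduler and display the queue ACLs that apply to whichever user runs the commands.

# Refresh the scheduler with the new capacity-scheduler.xml, then list the
# queue operations allowed for the current user (output format varies by version).
yarn rmadmin -refreshQueues
mapred queue -showacls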
11-11-2015
03:14 PM
Now that the Garbage-First garbage collector is fully supported by Oracle, have we seen anyone using it for production clusters? Is it officially supported by Hortonworks when using Java 8?
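For concreteness, this is the kind of change in question, sketched for the NameNode only in hadoop-env.sh; the pause-time target is a placeholder, not a recommendation:

# Enable G1 for the NameNode JVM (example flags only).
export HADOOP_NAMENODE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200 ${HADOOP_NAMENODE_OPTS}"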
Labels:
- Hortonworks Data Platform (HDP)
11-10-2015
04:56 AM
3 Kudos
SYMPTOM: Attempting to submit a Pig job via WebHCat that defines parameter(s) for substitution as command-line arguments results in an "incorrect usage" message and the job does not run. Doing the same through Hue results in an "undefined parameter" message and the job does not run.

ROOT CAUSE: If the parameter is passed to curl as a single argument (-d 'arg=-param paramName=paramValue'), it is interpreted incorrectly by Pig. Submitting the parameter via Hue as a single argument has the same unwanted effect.

WORKAROUND: Pass the parameter as two arguments:

curl -d file=myScript.pig -d 'arg=-param' -d 'arg=paramName=paramValue' 'http://<server>:50111/templeton/v1/pig'

To achieve the same using Hue, pass two arguments in sequence (refer to the attached image for an example).

RESOLUTION: WebHCat works as designed; the issue is a limitation of how the parameter is handled when passed as a single argument. The Hue workaround works for a single parameter, but multiple parameters may not work.
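For completeness, the same two-argument pattern extended to two substitution parameters; this is a sketch, and the script path, parameter names, and user are made-up examples:

curl -s -d user.name=hdfs \
     -d file=/user/hdfs/myScript.pig \
     -d 'arg=-param' -d 'arg=INPUT=/data/in' \
     -d 'arg=-param' -d 'arg=OUTPUT=/data/out' \
     'http://<server>:50111/templeton/v1/pig'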
11-10-2015
02:58 AM
I was mistakenly using the HDP 2.3.0 Sandbox, which uses Ambari 2.1.0. Your advice worked perfectly in the latest version. Thanks!
11-10-2015
02:56 AM
Ambari attempts to determine whether the demo LDAP server supports paged results, which it does not, so it responds with UNAVAILABLE_CRITICAL_EXTENSION. The demo LDAP server in Knox 0.6.0 (HDP 2.3.0) is based on ApacheDS 2.0.0-M15. Support for paged results was added in version 2.0.0-M13 (DIRSERVER-434), so I'm not sure why this wouldn't work. It's unlikely to be solved by configuration though.
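One way to observe this directly, assuming the openldap-clients tools are installed (a sketch using the demo credentials from users.ldif): request the paged-results control as critical and see whether the server accepts it.

ldapsearch -x -H ldap://sandbox.hortonworks.com:33389 \
  -D 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org' -w guest-password \
  -b 'dc=hadoop,dc=apache,dc=org' \
  -E '!pr=100/noprompt' '(objectclass=person)' uid
# A server without paged-results support should reject the critical control
# instead of returning entries.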
11-10-2015
02:56 AM
4 Kudos
Here's a complete guide, thanks to @Paul Codding's advice to disable pagination. Requires HDP Sandbox 2.3.2 or later (Ambari 2.1.1+).

1. In Ambari, start the demo LDAP server (Knox gateway is not required): Knox > Service Actions > Start Demo LDAP

2. Follow the Ambari Security Guide to enable LDAP (press Enter for blank values):

[root@sandbox ~]# ambari-server setup-ldap
Using python /usr/bin/python2.6
Setting up LDAP properties...
Primary URL* {host:port} : sandbox.hortonworks.com:33389
Secondary URL {host:port} :
Use SSL* [true/false] (false): false
User object class* (posixAccount): person
User name attribute* (uid): uid
Group object class* (posixGroup): groupofnames
Group name attribute* (cn): cn
Group member attribute* (memberUid): member
Distinguished name attribute* (dn): dn
Base DN* : dc=hadoop,dc=apache,dc=org
Referral method [follow/ignore] :
Bind anonymously* [true/false] (false): false
Manager DN* : uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
Enter Manager Password* : guest-password
Re-enter password: guest-password
====================
Review Settings
====================
authentication.ldap.managerDn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
authentication.ldap.managerPassword: *****
Save settings [y/n] (y)? y
Saving...done
Ambari Server 'setup-ldap' completed successfully.
3. Configure Ambari to disable pagination, and restart Ambari Server:

[root@sandbox ~]# echo "authentication.ldap.pagination.enabled=false" >> /etc/ambari-server/conf/ambari.properties
[root@sandbox ~]# ambari-server restart
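(Optional) To double-check that the property was written, a trivial grep:

[root@sandbox ~]# grep pagination /etc/ambari-server/conf/ambari.properties
authentication.ldap.pagination.enabled=false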
4. When Ambari startup completes, the objects in /etc/knox/conf/users.ldif are available in Ambari. Here's a quick reference:

admin / admin-password
guest / guest-password
sam / sam-password
tom / tom-password

Note: LDAP accounts with the same names as local accounts will replace the local accounts, so the admin password will now be 'admin-password' instead of 'admin'.

5. To customize the demo LDAP directory:

- In Ambari: Knox > Service Actions > Stop Demo LDAP
- Edit /etc/knox/conf/users.ldif
- Start the LDAP server manually (Ambari will overwrite users.ldif otherwise):

nohup su - knox -c 'java -jar /usr/hdp/current/knox-server/bin/ldap.jar /usr/hdp/current/knox-server/conf' &

- Synchronize LDAP Users & Groups (see console output below):

[root@sandbox ~]# ambari-server sync-ldap --all
Using python /usr/bin/python2.6
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password: admin-password
Syncing all...
Completed LDAP Sync.
Summary:
memberships:
removed = 0
created = 2
users:
updated = 0
removed = 1
created = 3
groups:
updated = 2
removed = 0
created = 0
Ambari Server 'sync-ldap' completed successfully.
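If you want to inspect the demo directory contents directly, here is a sketch assuming the openldap-clients package is installed; it binds as the demo guest account from users.ldif:

ldapsearch -x -H ldap://sandbox.hortonworks.com:33389 \
  -D 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org' -w guest-password \
  -b 'dc=hadoop,dc=apache,dc=org' '(objectclass=person)' uid cn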
11-09-2015
06:13 PM
1 Kudo
Unfortunately config groups are only applicable when the HS2 instances are on different hosts.
11-09-2015
06:12 PM
You can manually start up the second HS2 instance and use --hiveconf to override some of the properties from the standard config.
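A rough sketch of what that could look like; the port, scratch directory, and log path are made-up examples, not recommendations:

# Start a second HiveServer2 on the same host, overriding the Thrift port so it
# does not clash with the Ambari-managed instance.
su - hive -c "nohup hive --service hiveserver2 \
  --hiveconf hive.server2.thrift.port=10010 \
  --hiveconf hive.exec.scratchdir=/tmp/hive2 \
  > /var/log/hive/hiveserver2-2.log 2>&1 &"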