Member since: 03-15-2018
Posts: 27
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 584 | 05-16-2018 07:51 AM |
07-17-2018 12:28 PM
@Michael Bronson The documentation has recommended partition sizes for / and /var. /var mostly holds your logs, which can take up a lot of space. AFAIK swap should be disabled and swappiness set to 0.
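As a configuration sketch (run as root on each node; exact steps depend on the distro), disabling swap and setting swappiness looks like:

```shell
# Configuration sketch, to be run as root on each cluster node.
swapoff -a                                   # disable all active swap immediately
sysctl -w vm.swappiness=0                    # tell the kernel to avoid swapping
echo 'vm.swappiness=0' >> /etc/sysctl.conf   # persist the setting across reboots
```

To keep swap off permanently, the swap entries in /etc/fstab also need to be commented out.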
07-10-2018 11:19 PM
Hi @Michael Bronson You can calculate the requirements based on the amount and type of data using the following guide: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_cluster-planning/bk_cluster-planning.pdf
07-10-2018 11:04 PM
@Lian Jiang I did authenticate Knox using PAM. I created an ACL that gives read access on the /etc/shadow file to the knox user only. Alternatively, you can try creating a link to /etc/shadow and granting read access on that link. Links I referred to:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/setting_up_pam_authentication.html
https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_knox_ldap_pam.html
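A minimal sketch of the ACL approach (run as root; assumes the Knox gateway runs as the local user "knox"):

```shell
# Grant only the knox user read access to /etc/shadow via a POSIX ACL
setfacl -m u:knox:r /etc/shadow
# Inspect the resulting ACL entries to confirm the grant
getfacl /etc/shadow
```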
06-04-2018 06:57 PM
@Vinay K In a one-way trust between a trusted domain (the AD domain) and a trusting domain (the MIT KDC), users and computers in the trusted domain can access resources in the trusting domain, but users in the trusting domain cannot access resources in the trusted domain. So essentially you tell your MIT KDC to trust users from AD to access resources in your cluster. Service access works the same way as for MIT KDC users: the service asks Kerberos to authenticate the user, and Kerberos checks the user's domain. If the user is from a trusted domain, Kerberos asks AD/LDAP to authenticate; if AD authenticates the user, Kerberos trusts that user, and so does your service.
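As a sketch, the MIT KDC side of such a one-way trust is usually established by adding the cross-realm ticket-granting principal (realm names and the trust password below are placeholders; the AD side must be configured with the matching trust password):

```shell
# On the MIT KDC host; MIT.EXAMPLE.COM and AD.EXAMPLE.COM are placeholder realms.
# The password must match the trust password set on the AD side.
kadmin.local -q "addprinc -pw <trust-password> krbtgt/MIT.EXAMPLE.COM@AD.EXAMPLE.COM"
```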
06-02-2018 12:39 PM
Well, the configuration files were correct, but the environment was not set properly. I checked the HBase env on both nodes and found a difference. Updating the following properties in Ambari fixed it:
export LD_LIBRARY_PATH=::/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native
export HADOOP_HOME=/usr/hdp/2.6.3.0-235/hadoop
export HADOOP_CONF_DIR=/usr/hdp/2.6.3.0-235/hadoop/etc/hadoop
06-01-2018 09:45 AM
@schhabra I checked hbase-site.xml, hdfs-site.xml and core-site.xml. They are exactly the same on both nodes.
05-31-2018 02:06 PM
1 Kudo
@Krishna Pandey Thanks, it worked. I needed to give the knox user read permission on /etc/shadow. It's better to create ACLs for it.
05-31-2018 01:07 PM
@Krishna Pandey Yes, the permissions on the topology file were not correct. But now I'm getting this error:
HTTP/1.1 401 Unauthorized
Date: Thu, 31 May 2018 13:07:02 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/pamtest; Max-Age=0; Expires=Wed, 30-May-2018 13:07:04 GMT
WWW-Authenticate: BASIC realm="application"
Content-Length: 0
Server: Jetty(9.2.15.v20160210)
The cluster is kerberized as well.
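For reference, the failing request can be reproduced with a basic-auth call against the topology (the gateway host, port, user, and password are placeholders; "pamtest" is the topology name from the response above):

```shell
# knox-host:8443 is a placeholder for the actual gateway address;
# myunixuser/mypassword stand in for a real Unix account.
curl -iku myunixuser:mypassword \
  "https://knox-host:8443/gateway/pamtest/webhdfs/v1/?op=LISTSTATUS"
```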
05-31-2018 12:10 PM
@Krishna Pandey The Linux distro is CentOS 7. I tried PAM authentication and am getting an HTTP 404 error.
05-30-2018 02:11 PM
We have started the demo LDAP to access services through the Knox gateway, but I want to access those services using my Unix/POSIX users, which are already created.
Labels:
- Apache Knox