Member since: 03-15-2018
Posts: 27
Kudos Received: 2
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 931 | 05-16-2018 07:51 AM |
06-04-2018 06:57 PM
@Vinay K In a one-way trust between a trusted domain (the AD domain) and a trusting domain (the MIT KDC realm), users and computers in the trusted domain can access resources in the trusting domain, but users in the trusting domain cannot access resources in the trusted domain. In effect, you tell your MIT KDC to trust AD users to access resources in your cluster. Service access happens the same way as for MIT KDC users: the service asks Kerberos to authenticate the user, Kerberos checks the user's domain, and if the user comes from a trusted domain, Kerberos defers the authentication to AD/LDAP. If AD authenticates the user, Kerberos trusts that user, and so does your service.
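For reference, a minimal sketch of how such a one-way trust is typically established. The realm names AD.EXAMPLE.COM and MIT.EXAMPLE.COM and the trust password are placeholders, not values from this thread:

```
# On the MIT KDC: add the cross-realm principal so the MIT realm accepts
# tickets issued by AD (the password must match the AD side exactly).
kadmin.local -q "addprinc -pw <trust-password> krbtgt/MIT.EXAMPLE.COM@AD.EXAMPLE.COM"

# On an AD domain controller: create the matching one-way realm trust
# and allow AES for the MIT realm.
netdom trust MIT.EXAMPLE.COM /Domain:AD.EXAMPLE.COM /add /realm /passwordt:<trust-password>
ksetup /SetEncTypeAttr MIT.EXAMPLE.COM AES256-CTS-HMAC-SHA1-96

# On the cluster, hadoop.security.auth_to_local in core-site.xml also needs
# a rule mapping AD principals to local short names, e.g.:
#   RULE:[1:$1@$0](.*@AD\.EXAMPLE\.COM)s/@.*//
```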
06-02-2018 12:39 PM
Well, the configuration files were correct, but the environment was not set properly. I checked hbase-env on both nodes and found a difference. Updating hbase-env in Ambari with the following properties fixed it:

```
export LD_LIBRARY_PATH=::/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native
export HADOOP_HOME=/usr/hdp/2.6.3.0-235/hadoop
export HADOOP_CONF_DIR=/usr/hdp/2.6.3.0-235/hadoop/etc/hadoop
```
06-01-2018 09:45 AM
@schhabra I checked hbase-site.xml, hdfs-site.xml, and core-site.xml. They are exactly the same on both nodes.
05-31-2018 02:06 PM
1 Kudo
@Krishna Pandey Thanks, it worked. The Knox user needs read permission on /etc/shadow; it's better to grant that through an ACL than by loosening the file mode.
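For anyone hitting the same thing, a minimal sketch of granting that access via a POSIX ACL, assuming the Knox gateway runs as the local user knox:

```
# Give only the knox user read access to /etc/shadow:
setfacl -m u:knox:r /etc/shadow
# Verify the resulting ACL:
getfacl /etc/shadow
```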
05-31-2018 01:07 PM
@Krishna Pandey Yes, the permissions on the topology file were not correct. But now I'm getting this error:

```
HTTP/1.1 401 Unauthorized
Date: Thu, 31 May 2018 13:07:02 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/pamtest; Max-Age=0; Expires=Wed, 30-May-2018 13:07:04 GMT
WWW-Authenticate: BASIC realm="application"
Content-Length: 0
Server: Jetty(9.2.15.v20160210)
```

The cluster is kerberized as well.
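For completeness, the kind of request that produces this response looks like the following; the host and credentials are placeholders, 8443 is Knox's default gateway port, and the topology name pamtest is taken from the Set-Cookie path above:

```
curl -ik -u <unix-user>:<password> \
  "https://<knox-host>:8443/gateway/pamtest/webhdfs/v1/?op=LISTSTATUS"
```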
05-31-2018 12:10 PM
@Krishna Pandey The Linux distro is CentOS 7. I tried PAM authentication and am getting an HTTP 404 error.
05-30-2018 02:11 PM
We have set up the demo LDAP to access services through the Knox gateway, but I want to access those services using my existing Unix/POSIX users.
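For context, Knox can authenticate against local Unix users via PAM. A hedged sketch of what the authentication provider block in the topology file would look like, assuming the KnoxPamRealm class name shipped with HDP 2.x Knox and the common login PAM service:

```xml
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <name>sessionTimeout</name>
        <value>30</value>
    </param>
    <param>
        <name>main.pamRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxPamRealm</value>
    </param>
    <param>
        <name>main.pamRealm.service</name>
        <value>login</value>
    </param>
</provider>
```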
Labels:
- Apache Knox
05-28-2018 07:00 AM
After enabling NameNode HA, 2 out of my 3 HBase RegionServers are not coming up. I looked at the logs and found that they throw an UnknownHostException for the nameservice:

```
2018-05-24 08:48:29,551 INFO [regionserver/atlhashdn02.hashmap.net/192.166.4.37:16020] regionserver.HRegionServer: STOPPED: Failed initialization
2018-05-24 08:48:29,552 ERROR [regionserver/atlhashdn02.hashmap.net/192.166.4.37:16020] regionserver.HRegionServer: Failed init
java.lang.IllegalArgumentException: java.net.UnknownHostException: clusterha
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
    at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:148)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:180)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1648)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1381)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:917)
    at java.lang.Thread.run(Thread.java:745)
```
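For context, clusterha is the logical HA nameservice (the name in the exception), so the HDFS client inside the RegionServer can only resolve it if the HA settings are visible on its classpath. A minimal sketch of the relevant hdfs-site.xml entries, with the NameNode hostnames as placeholders:

```xml
<property>
    <name>dfs.nameservices</name>
    <value>clusterha</value>
</property>
<property>
    <name>dfs.ha.namenodes.clusterha</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.clusterha.nn1</name>
    <value>namenode1.example.com:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.clusterha.nn2</name>
    <value>namenode2.example.com:8020</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.clusterha</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```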
Labels:
- Apache Hadoop
- Apache HBase
05-18-2018 08:42 AM
@Mike Wong I'd recommend disabling SELinux and rebooting the machines. After that, look into the HDFS logs and make sure HDFS is up with no alerts; try restarting HDFS. All the other services should come up then.
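A minimal sketch of disabling SELinux on CentOS/RHEL 7, run as root on each machine before the reboot:

```
# Switch SELinux to permissive for the running system:
setenforce 0
# Make the change survive the reboot:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```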
05-18-2018 07:12 AM
@Mike Wong Please check your /etc/hosts on all nodes, and also verify that SELinux is disabled using the getenforce command.
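For example, a consistent /etc/hosts entry on every node would look like this (IP and hostname here are illustrative, borrowed from the RegionServer log above), followed by the SELinux check:

```
# /etc/hosts: FQDN first, then the short name, identical on all nodes
192.166.4.37   atlhashdn02.hashmap.net   atlhashdn02

# Should print "Permissive" or "Disabled":
getenforce
```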