Member since: 03-22-2016
Posts: 40
Kudos Received: 5
Solutions: 0
10-10-2017
10:57 AM
Hi @Mugdha, I am facing the same kind of exception when trying to integrate Knox with AD on a Kerberized cluster. I followed the ambari-server setup-security document that you suggested, but the same exception remains.

Log: cat /usr/hdp/current/knox-server/logs/gateway.log

ERROR hadoop.gateway (AppCookieManager.java:getAppCookie(126)) - Failed Knox->Hadoop SPNegotiation authentication for URL: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
WARN hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(138)) - Connection exception dispatching request: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
java.io.IOException: SPNego authn failed, can not get hadoop.auth cookie

cat /usr/hdp/current/knox-server/conf/topologies/sample5.xml

<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param name="main.ldapRealm" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm"/>
      <param name="main.ldapContextFactory" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory"/>
      <param name="main.ldapRealm.contextFactory" value="$ldapContextFactory"/>
      <param name="main.ldapRealm.contextFactory.url" value="ldaps://abcd123:636"/>
      <param name="main.ldapRealm.contextFactory.systemUsername" value="testuser"/>
      <param name="main.ldapRealm.contextFactory.systemPassword" value="testpassword"/>
      <param name="main.ldapRealm.searchBase" value="DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.userSearchAttributeName" value="sAMAccountName"/>
      <param name="main.ldapRealm.userObjectClass" value="person"/>
      <param name="main.ldapRealm.authorizationEnabled" value="true"/>
      <param name="main.ldapRealm.groupSearchBase" value="OU=Service Accounts,OU=Applications,DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.groupObjectClass" value="group"/>
      <param name="main.ldapRealm.groupIdAttribute" value="sAMAccountName"/>
      <param name="main.ldapRealm.memberAttribute" value="member"/>
      <param name="main.cacheManager" value="org.apache.shiro.cache.ehcache.EhCacheManager"/>
      <param name="main.securityManager.cacheManager" value="$cacheManager"/>
      <param name="main.ldapRealm.authenticationCachingEnabled" value="true"/>
      <param name="urls./**" value="authcBasic"/>
    </provider>
    <provider>
      <role>authorization</role>
      <name>AclsAuthz</name>
      <enabled>true</enabled>
    </provider>
    <provider>
      <role>identity-assertion</role>
      <name>Default</name>
      <enabled>true</enabled>
    </provider>
  </gateway>
  <service>
    <role>NAMENODE</role>
    <url>hdfs://hostname1:8020</url>
  </service>
  <service>
    <role>JOBTRACKER</role>
    <url>rpc://hostname2:8050</url>
  </service>
  <service>
    <role>WEBHDFS</role>
    <url>http://hostname1:50070/webhdfs</url>
  </service>
  <service>
    <role>WEBHCAT</role>
    <url>http://hostname1:50111/templeton</url>
  </service>
  <service>
    <role>OOZIE</role>
    <url>http://hostname3:11000/oozie</url>
  </service>
  <service>
    <role>WEBHBASE</role>
    <url>http://hostname2:8080</url>
  </service>
  <service>
    <role>HIVE</role>
    <url>http://hostname1:10001/cliservice</url>
  </service>
  <service>
    <role>RESOURCEMANAGER</role>
    <url>http://hostname2:8088/ws</url>
  </service>
  <service>
    <role>KNOX</role>
    <url>hostname1</url>
  </service>
</topology>

url1: curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/api/v1/version'

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=123;Path=/gateway/sample5;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Length: 169
Content-Type: application/xml
Server: Jetty(8.1.14.v20131031)

<?xml version="1.0" encoding="UTF-8"?>
<ServerVersion>
  <version>0.6.0.2.4.3.0-227</version>
  <hash>12322</hash>
</ServerVersion>

url2: curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/webhdfs/v1?op=GETHOMEDIRECTORY'

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 500 Server Error</title>
</head>
<body>
<h2>HTTP ERROR 500</h2>
<p>Problem accessing /gateway/sample5/webhdfs/v1. Reason:
<pre>Server Error</pre></p>
<hr/><i><small>Powered by Jetty://</small></i><br/>
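Since url1 (the Knox admin API) returns 200 while the WebHDFS call fails with an SPNEGO error, the Shiro/AD login itself looks fine and the break is between Knox and the Kerberized NameNode. A minimal check from the Knox host, assuming the default HDP keytab location and a knox/_HOST principal (adjust the path, principal and realm for your cluster):

# 1. Confirm the Knox service keytab is readable and can obtain a ticket
klist -kt /etc/security/keytabs/knox.service.keytab
kinit -kt /etc/security/keytabs/knox.service.keytab knox/$(hostname -f)@EXAMPLE.COM

# 2. With that ticket, call WebHDFS directly with SPNEGO, bypassing Knox
curl --negotiate -u : -i 'http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY'

If step 2 also fails, the problem is on the Kerberos side (keytab, principal, or krb5.conf on the Knox host) rather than in the topology file.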
10-06-2017
07:37 AM
@Geoffrey Shelton Okot, Yes, that answers my question. Thank you so much for your response.
10-05-2017
01:47 PM
Hi @Geoffrey Shelton Okot I set my UserSearchFilter to sAMAccountName={0}. With this I am able to log in with an AD user, but I can't see anything in the UI other than the Access Manager tab. I am not sure how these permissions are set. Can you please provide some more information on this?
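From what I understand, a synced AD user normally comes in with the plain "User" role, which only exposes the Access Manager view; an admin has to raise the role (for example to Admin) by editing the user under the Settings section of the Ranger UI. As a rough sketch for checking which role the user actually has, assuming the internal xusers REST endpoint is available in this Ranger release and admin credentials work (the user name "aduser" is a placeholder):

curl -s -u admin:admin 'http://ranger-host:6080/service/xusers/users' | grep -i -A3 'aduser'
# look for userRoleList: ROLE_USER (normal user) vs ROLE_SYS_ADMIN (full UI access)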
10-05-2017
09:17 AM
Hi @Geoffrey Shelton Okot I have the proper settings, and the authentication method is toggled to AD only. I am getting the following error when I try to log in with an AD user: "The username or password you entered is incorrect"
10-05-2017
07:52 AM
Hi, I am trying to log in to the Ranger UI with Active Directory users, but I am not able to. However, I am able to log in with the default username:password (admin:admin). The error I get when I try to log in with an AD user is: "The username or password you entered is incorrect". Also, I am able to successfully sync AD users in Ranger, i.e., I can see the AD users in the Users/Groups tab, so I am guessing the usersync configuration is correct. I think I am missing some configuration for UI login. I am using Ambari version 2.2.2.0 and HDP version 2.4.3. Please suggest a solution. Thanks.
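One thing worth double-checking: user sync and UI login are configured separately, so seeing AD users in Users/Groups only proves usersync, not the authentication path. UI login is driven by the AD properties in ranger-admin-site (in Ambari under Ranger > Configs). A sketch of the properties involved; the values below are placeholders for your environment, not working settings:

ranger.authentication.method=ACTIVE_DIRECTORY
ranger.ldap.ad.domain=example.com
ranger.ldap.ad.url=ldap://ad-host.example.com:389
ranger.ldap.ad.base.dn=DC=example,DC=com
ranger.ldap.ad.bind.dn=CN=ranger-bind,OU=Service Accounts,DC=example,DC=com
ranger.ldap.ad.bind.password=*****
ranger.ldap.ad.user.searchfilter=(sAMAccountName={0})
ranger.ldap.ad.referral=follow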
Labels:
- Apache Ranger
09-21-2017
11:05 AM
Hi @mqureshi, My MySQL process is up. The issue was missing keytabs. Now the Hive service is up and running. Thanks for your reply.
09-21-2017
11:03 AM
Hi @Geoffrey Shelton Okot Thanks for your reply. I checked hiveserver2.log. According to the error messages in the logs, the keytabs were missing. After creating the keytabs, the Hive service started successfully.
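For anyone hitting the same issue, a quick way to confirm the Hive service keytab is present and usable, assuming the default HDP keytab path and a hive/_HOST principal (adjust for your cluster; in an Ambari-managed cluster the keytabs can also be regenerated from the Kerberos admin page):

ls -l /etc/security/keytabs/hive.service.keytab
klist -kt /etc/security/keytabs/hive.service.keytab
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)@EXAMPLE.COM
# if kinit succeeds, the keytab and principal are valid and Hive can be restarted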
09-20-2017
07:05 AM
Hi @mqureshi Nothing is running on port 9083. netstat -nlp | grep 9083 gives no output.
09-20-2017
05:22 AM
Hi, I am using a Kerberized cluster in which the Hive Metastore and Hive Server start but stop after a few minutes. When I check the Hive logs, I get the following error messages:

2017-09-20 06:06:00,514 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:main(5946)) - Metastore Thrift Server threw an exception...
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:109)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:91)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:83)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6001)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5942)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2017-09-20 06:06:00,520 INFO [Thread-4]: metastore.HiveMetaStore (HiveMetaStore.java:run(5931)) - Shutting down hive metastore.
2017-09-20 06:07:00,165 ERROR [pool-4-thread-200]: server.TThreadPoolServer (TThreadPoolServer.java:run(294)) - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:75)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
2017-09-20 06:07:01,631 ERROR [pool-4-thread-200]: server.TThreadPoolServer (TThreadPoolServer.java:run(294)) - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:75)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

Please help.
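The first error usually means the metastore could not bind port 9083, for example because a stale metastore process is still holding it; the later "Missing version in readMessageBegin, old client?" messages typically come from a client and server disagreeing on SASL/Kerberos settings. A quick check from the metastore host (a sketch to rule out a port conflict; this is an assumption to eliminate, not a confirmed cause here):

netstat -nlp | grep 9083            # anything already listening on the metastore port?
ps -ef | grep -i '[H]iveMetaStore'  # any stale metastore process left behind?
# if a stale process shows up, stop it before restarting Hive Metastore from Ambari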
Labels:
- Apache Ambari
- Apache Hive
09-06-2017
01:20 PM
Hi @Pravin Bhagade, This solved my issue. I had not imported the SSL certificate into the keystore. After doing this, the AD authentication works. Thank you so much.
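For reference, the import itself is a one-liner with keytool; a minimal sketch, assuming the AD/LDAPS certificate has been exported to /tmp/ad-ldaps.cer and that Ranger is configured to read the truststore shown (the alias, certificate file, and truststore path are placeholders):

keytool -importcert -alias ad-ldaps -file /tmp/ad-ldaps.cer \
  -keystore /etc/security/serverKeys/truststore.jks -storepass changeit
# restart the Ranger services afterwards so the newly trusted certificate is picked up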