Member since: 03-22-2016 · Posts: 40 · Kudos Received: 5 · Solutions: 0
10-27-2017 01:13 PM
I have a Kerberized cluster (HDP version: 2.4.3, Ambari version: 2.2.2.0) with the HBase service installed, but the HBase master stops after a timeout with the following errors.

hbase master log:

2017-10-27 12:56:56,437 ERROR [Thread-100] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
2017-10-27 13:11:56,437 ERROR [Thread-100] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
2017-10-27 13:22:23,320 FATAL [hostname:60000.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Timedout 2400000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1015)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:809)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:193)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1793)
at java.lang.Thread.run(Thread.java:748)
2017-10-27 13:22:23,321 FATAL [hostname:60000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor]
2017-10-27 13:22:23,321 FATAL [hostname:60000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Timedout 2400000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1015)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:809)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:193)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1793)
at java.lang.Thread.run(Thread.java:748)

regionserver log:

2017-10-27 13:30:39,787 ERROR [RS_OPEN_PRIORITY_REGION-hostname:16020-1] handler.OpenRegionHandler: Failed open of region=hbase:namespace,,1508913064554.16ee288e7e2f92b959283a91a2205c93., starting to roll back the global memstore size.
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'hbase/principal' (action=admin)
at org.apache.ranger.authorization.hbase.AuthorizationSession.publishResults(AuthorizationSession.java:254)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:595)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:664)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preOpen(RangerAuthorizationCoprocessor.java:872)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preOpen(RangerAuthorizationCoprocessor.java:778)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$1.call(RegionCoprocessorHost.java:430)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preOpen(RegionCoprocessorHost.java:426)
at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:812)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:796)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6356)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6317)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6288)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6244)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6195)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I have tried setting the following configurations:

hbase.master.namespace.init.timeout = 2400000
hbase.regionserver.executor.openregion.threads = 200

But the master still goes down after 40 minutes (which matches the 2400000 ms namespace-init timeout). Is there something I am missing? Thanks in advance.
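For reference, the two properties above are usually set as custom hbase-site properties in Ambari; expressed directly in hbase-site.xml they would look like this (property names and values taken from the post itself):

```xml
<!-- How long the master waits for the hbase:namespace table to be
     assigned before aborting (milliseconds) -->
<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>2400000</value>
</property>
<!-- Number of regionserver handler threads for opening regions -->
<property>
  <name>hbase.regionserver.executor.openregion.threads</name>
  <value>200</value>
</property>
```

Note that raising the timeout only buys time; the regionserver log shows the real blocker is Ranger denying the 'hbase' service principal the admin action, so the namespace region can never open regardless of the timeout.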
10-27-2017 12:33 PM
I was able to solve the issue: the keytab was not valid. I suspect the keytab file was corrupted, though I am not sure of the root cause. After generating a new keytab, my Storm service is up and running.
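A quick way to check whether a keytab is usable before restarting the service is to list its contents and authenticate with it manually. This is a generic sketch; the keytab path and principal below are examples, not values from this thread:

```shell
# List the principals, key version numbers, and encryption types in the keytab;
# a corrupted file typically fails to parse here
klist -kte /etc/security/keytabs/nimbus.service.keytab

# Try to obtain a ticket with the keytab; a stale or corrupted keytab
# (or a KVNO mismatch after regeneration) fails at this step
kinit -kt /etc/security/keytabs/nimbus.service.keytab nimbus/host.example.com@EXAMPLE.COM
```

If kinit succeeds but the service still fails, the problem is usually in the service's JAAS configuration rather than the keytab itself.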
10-27-2017 09:37 AM
I have a Kerberized cluster (HDP version: 2.4.3, Ambari version: 2.2.2.0) in which Storm is installed. Before the cluster was Kerberized, the Storm service (Nimbus and Supervisor) ran successfully, but after Kerberization Nimbus and Supervisor fail to start. I checked the logs and found the following error:

[ERROR] Error on initialization of server service-handler
java.lang.RuntimeException: org.apache.storm.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for nimbus

The ZooKeeper service is up and Storm points to the right ZooKeeper hosts. Also, there is no data in the Storm local directories: the local directory is /hadoop/storm, and its /nimbus/inbox folder is empty. Please provide some solution. Thanks.
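An InvalidACL error after Kerberization usually means the ACLs on Storm's ZooKeeper znodes (created before security was enabled) no longer match what the now-Kerberized Nimbus expects. The current ACLs can be inspected with the ZooKeeper CLI; the znode path /storm below is ZooKeeper's default Storm root (storm.zookeeper.root) and the host is a placeholder:

```shell
# Open a ZooKeeper shell against one of the quorum members
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zkhost:2181

# Inside the shell: list Storm's znodes and show who may access them;
# pre-Kerberos znodes often carry world-writable ACLs that conflict
# with the sasl ACLs a secure Nimbus tries to set
ls /storm
getAcl /storm
```

One common remediation (after backing up or confirming the data is disposable) is to remove the stale /storm subtree so a secure Nimbus can recreate it with the correct ACLs, but verify first what the ACLs actually are.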
Labels: Apache Storm
10-10-2017 10:57 AM
Hi @Mugdha, I am facing the same kind of exception when trying to integrate Knox with AD on a Kerberized cluster. I followed the ambari-server setup-security document you suggested, but the same exception remains.

Log: cat /usr/hdp/current/knox-server/logs/gateway.log

ERROR hadoop.gateway (AppCookieManager.java:getAppCookie(126)) - Failed Knox->Hadoop SPNegotiation authentication for URL: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
WARN hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(138)) - Connection exception dispatching request: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
java.io.IOException: SPNego authn failed, can not get hadoop.auth cookie
java.io.IOException: SPNego authn failed, can not get hadoop.auth cookie

Topology: cat /usr/hdp/current/knox-server/conf/topologies/sample5.xml

<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param name="main.ldapRealm" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm"/>
      <param name="main.ldapContextFactory" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory"/>
      <param name="main.ldapRealm.contextFactory" value="$ldapContextFactory"/>
      <param name="main.ldapRealm.contextFactory.url" value="ldaps://abcd123:636"/>
      <param name="main.ldapRealm.contextFactory.systemUsername" value="testuser"/>
      <param name="main.ldapRealm.contextFactory.systemPassword" value="testpassword"/>
      <param name="main.ldapRealm.searchBase" value="DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.userSearchAttributeName" value="sAMAccountName"/>
      <param name="main.ldapRealm.userObjectClass" value="person"/>
      <param name="main.ldapRealm.authorizationEnabled" value="true"/>
      <param name="main.ldapRealm.groupSearchBase" value="OU=Service Accounts,OU=Applications,DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.groupObjectClass" value="group"/>
      <param name="main.ldapRealm.groupIdAttribute" value="sAMAccountName"/>
      <param name="main.ldapRealm.memberAttribute" value="member"/>
      <param name="main.cacheManager" value="org.apache.shiro.cache.ehcache.EhCacheManager"/>
      <param name="main.securityManager.cacheManager" value="$cacheManager"/>
      <param name="main.ldapRealm.authenticationCachingEnabled" value="true"/>
      <param name="urls./**" value="authcBasic"/>
    </provider>
    <provider>
      <role>authorization</role>
      <name>AclsAuthz</name>
      <enabled>true</enabled>
    </provider>
    <provider>
      <role>identity-assertion</role>
      <name>Default</name>
      <enabled>true</enabled>
    </provider>
  </gateway>
  <service>
    <role>NAMENODE</role>
    <url>hdfs://hostname1:8020</url>
  </service>
  <service>
    <role>JOBTRACKER</role>
    <url>rpc://hostname2:8050</url>
  </service>
  <service>
    <role>WEBHDFS</role>
    <url>http://hostname1:50070/webhdfs</url>
  </service>
  <service>
    <role>WEBHCAT</role>
    <url>http://hostname1:50111/templeton</url>
  </service>
  <service>
    <role>OOZIE</role>
    <url>http://hostname3:11000/oozie</url>
  </service>
  <service>
    <role>WEBHBASE</role>
    <url>http://hostname2:8080</url>
  </service>
  <service>
    <role>HIVE</role>
    <url>http://hostname1:10001/cliservice</url>
  </service>
  <service>
    <role>RESOURCEMANAGER</role>
    <url>http://hostname2:8088/ws</url>
  </service>
  <service>
    <role>KNOX</role>
    <url>hostname1</url>
  </service>
</topology>

url1 (succeeds):

curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/api/v1/version'

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=123;Path=/gateway/sample5;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Length: 169
Content-Type: application/xml
Server: Jetty(8.1.14.v20131031)

<?xml version="1.0" encoding="UTF-8"?>
<ServerVersion>
  <version>0.6.0.2.4.3.0-227</version>
  <hash>12322</hash>
</ServerVersion>

url2 (fails):

curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/webhdfs/v1?op=GETHOMEDIRECTORY'

<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/> <title>Error 500 Server Error</title> </head> <body><h2>HTTP ERROR 500</h2> <p>Problem accessing /gateway/sample5/webhdfs/v1. Reason: <pre> Server Error</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
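Note that url1 returning 200 shows the Shiro/LDAP login itself works; the gateway.log error is about the second hop, where Knox authenticates to WebHDFS via SPNEGO. One way to isolate that hop is to test SPNEGO from the Knox host directly, bypassing the gateway. The keytab path and principal below are typical examples, not values from this thread:

```shell
# Authenticate as the Knox service principal (example path and principal)
kinit -kt /etc/security/keytabs/knox.service.keytab knox/host.example.com@EXAMPLE.COM

# Call the secured WebHDFS endpoint directly with SPNEGO negotiation;
# a failure here points at Kerberos/krb5.conf/keytab problems on the
# Knox host rather than at the topology file
curl --negotiate -u : 'http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY'
```

If this direct call works but Knox still fails, the next place to look would be Knox's own Kerberos settings (gateway.hadoop.kerberos.secured and the JAAS configuration it uses for dispatch).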
10-06-2017 07:37 AM
@Geoffrey Shelton Okot, Yes, that answers my doubt. Thank you so much for your response.
10-05-2017 01:47 PM
Hi @Geoffrey Shelton Okot, I set my UserSearchFilter to sAMAccountName={0}. With this I am able to log in with an AD user. But I can't see anything in the UI other than the Access Manager tab, and I am not sure how these permissions are set. Can you please provide some more information on this?
10-05-2017 09:17 AM
Hi @Geoffrey Shelton Okot, I have the proper settings, and the authentication method is toggled to AD only. I still get the following error when I try to log in with an AD user: "The username or password you entered is incorrect"
10-05-2017 07:52 AM
Hi, I am trying to log in to the Ranger UI with Active Directory users, but I am not able to. However, I can log in with the default credentials, admin:admin. The error I get when I try to log in with an AD user is: "The username or password you entered is incorrect". On the other hand, AD user sync works: I can see the AD users in the Users/Groups tab, so I assume the usersync configuration is correct. I think I am missing some configuration specific to UI login. I am using Ambari version 2.2.2.0 and HDP version 2.4.3. Please suggest some solution. Thanks.
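One thing worth noting: in Ranger, usersync and UI authentication are configured separately, so working sync does not imply working login. UI login against AD is governed by ranger.authentication.method and the ranger.ldap.ad.* properties in the Ranger Admin configuration. A sketch of the relevant properties follows; all values are placeholders for illustration, not taken from this thread:

```properties
# Switch the Ranger Admin UI from local (UNIX) auth to Active Directory
ranger.authentication.method=ACTIVE_DIRECTORY

# AD connection details used only for UI login (placeholder values)
ranger.ldap.ad.domain=example.com
ranger.ldap.ad.url=ldap://ad.example.com:389
ranger.ldap.ad.base.dn=DC=example,DC=com
ranger.ldap.ad.bind.dn=CN=binduser,OU=Users,DC=example,DC=com
```

If these point at the wrong domain or base DN, sync can still succeed while every AD login fails with exactly the "username or password incorrect" message described above.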
Labels: Apache Ranger
09-21-2017 11:05 AM
Hi @mqureshi, my MySQL process is up. The issue was missing keytabs. Now my Hive service is up and running. Thanks for your reply.
09-21-2017 11:03 AM
Hi @Geoffrey Shelton Okot, thanks for your reply. I checked hiveserver2.log; according to the error messages in the logs, the keytabs were missing. After creating the keytabs, the Hive service started successfully.