Member since: 03-22-2016 · Posts: 40 · Kudos Received: 5 · Solutions: 0
10-27-2017
01:13 PM
I have a Kerberized cluster (HDP version 2.4.3, Ambari version 2.2.2.0) with the HBase service installed, but the HBase Master stops after a timeout with the following error.

HBase Master log:
2017-10-27 12:56:56,437 ERROR [Thread-100] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
2017-10-27 13:11:56,437 ERROR [Thread-100] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
2017-10-27 13:22:23,320 FATAL [hostname:60000.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Timedout 2400000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1015)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:809)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:193)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1793)
at java.lang.Thread.run(Thread.java:748)
2017-10-27 13:22:23,321 FATAL [hostname:60000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor]
2017-10-27 13:22:23,321 FATAL [hostname:60000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Timedout 2400000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1015)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:809)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:193)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1793)
at java.lang.Thread.run(Thread.java:748)

RegionServer log:
2017-10-27 13:30:39,787 ERROR [RS_OPEN_PRIORITY_REGION-hostname:16020-1] handler.OpenRegionHandler: Failed open of region=hbase:namespace,,1508913064554.16ee288e7e2f92b959283a91a2205c93., starting to roll back the global memstore size.
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'hbase/principal' (action=admin)
at org.apache.ranger.authorization.hbase.AuthorizationSession.publishResults(AuthorizationSession.java:254)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:595)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:664)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preOpen(RangerAuthorizationCoprocessor.java:872)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preOpen(RangerAuthorizationCoprocessor.java:778)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$1.call(RegionCoprocessorHost.java:430)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preOpen(RegionCoprocessorHost.java:426)
at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:812)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:796)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6356)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6317)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6288)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6244)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6195)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I have tried setting the following configurations:
hbase.master.namespace.init.timeout = 2400000
hbase.regionserver.executor.openregion.threads = 200
But after 40 minutes the master still goes down. Is there something I am missing? Thanks in advance.
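A quick way to narrow down whether the AccessDeniedException stems from a bad service keytab rather than a missing Ranger policy is to validate the HBase service principal by hand. A minimal sketch, assuming HDP default keytab paths and principal names:

# Inspect the keytab contents (principals and key version numbers)
klist -kt /etc/security/keytabs/hbase.service.keytab

# Try to authenticate as the HBase service principal; a failure here
# points at the keytab/KDC rather than at Ranger
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/$(hostname -f)

If authentication succeeds, the next thing to check is that the Ranger HBase policy grants the 'hbase' user admin-level access, since the RangerAuthorizationCoprocessor denies the region open with action=admin.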
10-27-2017
12:33 PM
I was able to solve the issue: my keytab was not valid. I suspect it was corrupted, though I am not sure of the cause. After generating a new keytab, my Storm service is up and running.
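For anyone hitting the same symptom, a sketch of the check-and-regenerate cycle, assuming an MIT KDC and HDP default keytab paths (Ambari's "Regenerate Keytabs" action achieves the same result):

# A corrupted keytab typically fails this authentication test
kinit -kt /etc/security/keytabs/nimbus.service.keytab nimbus/$(hostname -f)

# Regenerate the keytab entry from the KDC host (bumps the key version)
kadmin.local -q "ktadd -k /etc/security/keytabs/nimbus.service.keytab nimbus/$(hostname -f)@EXAMPLE.COM"

EXAMPLE.COM stands in for the actual realm.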
10-27-2017
09:37 AM
I have a Kerberized cluster (HDP version 2.4.3 and Ambari version 2.2.2.0) in which Storm is installed. Before the cluster was Kerberized, the Storm service (Nimbus and Supervisor) was running successfully. But after Kerberization, Nimbus and Supervisor fail to start. In the logs I found the following error:
[ERROR] Error on initialization of server service-handler
java.lang.RuntimeException: org.apache.storm.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for nimbus

The ZooKeeper service is up, and Storm points to the right ZooKeeper hosts. Also, there is no data in the Storm local directories: the local directory is /hadoop/storm, and its /nimbus/inbox folder is empty. Please provide some solution. Thanks.
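Since InvalidACL usually means the znodes carry ACLs that no longer match the client's credentials, one way to confirm is to inspect them with the ZooKeeper CLI. A minimal sketch, with a placeholder hostname and HDP default paths:

# Connect to ZooKeeper and look at the ACLs on Storm's znodes
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181
ls /storm        # znodes created by Storm
getAcl /storm    # on a Kerberized cluster these should include sasl entries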
Labels: Apache Storm
10-10-2017
10:57 AM
Hi @Mugdha, I am facing the same kind of exception when trying to integrate Knox with AD on a Kerberized cluster. I followed the ambari-server setup-security document you suggested, but the same exception remains.

Log (cat /usr/hdp/current/knox-server/logs/gateway.log):
ERROR hadoop.gateway (AppCookieManager.java:getAppCookie(126)) - Failed Knox->Hadoop SPNegotiation authentication for URL: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
WARN hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(138)) - Connection exception dispatching request: http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY&doAs=username
java.io.IOException: SPNego authn failed, can not get hadoop.auth cookie

Topology (cat /usr/hdp/current/knox-server/conf/topologies/sample5.xml):
<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param name="main.ldapRealm" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm"/>
      <param name="main.ldapContextFactory" value="org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory"/>
      <param name="main.ldapRealm.contextFactory" value="$ldapContextFactory"/>
      <param name="main.ldapRealm.contextFactory.url" value="ldaps://abcd123:636"/>
      <param name="main.ldapRealm.contextFactory.systemUsername" value="testuser"/>
      <param name="main.ldapRealm.contextFactory.systemPassword" value="testpassword"/>
      <param name="main.ldapRealm.searchBase" value="DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.userSearchAttributeName" value="sAMAccountName"/>
      <param name="main.ldapRealm.userObjectClass" value="person"/>
      <param name="main.ldapRealm.authorizationEnabled" value="true"/>
      <param name="main.ldapRealm.groupSearchBase" value="OU=Service Accounts,OU=Applications,DC=org,DC=apache,DC=com"/>
      <param name="main.ldapRealm.groupObjectClass" value="group"/>
      <param name="main.ldapRealm.groupIdAttribute" value="sAMAccountName"/>
      <param name="main.ldapRealm.memberAttribute" value="member"/>
      <param name="main.cacheManager" value="org.apache.shiro.cache.ehcache.EhCacheManager"/>
      <param name="main.securityManager.cacheManager" value="$cacheManager"/>
      <param name="main.ldapRealm.authenticationCachingEnabled" value="true"/>
      <param name="urls./**" value="authcBasic"/>
    </provider>
    <provider>
      <role>authorization</role>
      <name>AclsAuthz</name>
      <enabled>true</enabled>
    </provider>
    <provider>
      <role>identity-assertion</role>
      <name>Default</name>
      <enabled>true</enabled>
    </provider>
  </gateway>
  <service>
    <role>NAMENODE</role>
    <url>hdfs://hostname1:8020</url>
  </service>
  <service>
    <role>JOBTRACKER</role>
    <url>rpc://hostname2:8050</url>
  </service>
  <service>
    <role>WEBHDFS</role>
    <url>http://hostname1:50070/webhdfs</url>
  </service>
  <service>
    <role>WEBHCAT</role>
    <url>http://hostname1:50111/templeton</url>
  </service>
  <service>
    <role>OOZIE</role>
    <url>http://hostname3:11000/oozie</url>
  </service>
  <service>
    <role>WEBHBASE</role>
    <url>http://hostname2:8080</url>
  </service>
  <service>
    <role>HIVE</role>
    <url>http://hostname1:10001/cliservice</url>
  </service>
  <service>
    <role>RESOURCEMANAGER</role>
    <url>http://hostname2:8088/ws</url>
  </service>
  <service>
    <role>KNOX</role>
    <url>hostname1</url>
  </service>
</topology>

url1 (works):
curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/api/v1/version'
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=123;Path=/gateway/sample5;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Length: 169
Content-Type: application/xml
Server: Jetty(8.1.14.v20131031)

<?xml version="1.0" encoding="UTF-8"?>
<ServerVersion>
  <version>0.6.0.2.4.3.0-227</version>
  <hash>12322</hash>
</ServerVersion>

url2 (fails):
curl -u username:password -ik 'https://knoxhost:8443/gateway/sample5/webhdfs/v1?op=GETHOMEDIRECTORY'
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
    <title>Error 500 Server Error</title>
  </head>
  <body>
    <h2>HTTP ERROR 500</h2>
    <p>Problem accessing /gateway/sample5/webhdfs/v1. Reason:
    <pre>Server Error</pre></p>
    <hr /><i><small>Powered by Jetty://</small></i><br/>
  </body>
</html>
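One way to isolate the failure is to take Knox out of the picture and test SPNEGO against WebHDFS directly from the Knox host. A sketch, assuming the HDP default Knox keytab path:

# Obtain a ticket as the Knox service principal first
kinit -kt /etc/security/keytabs/knox.service.keytab knox/$(hostname -f)

# --negotiate makes curl perform SPNEGO using the current ticket cache
curl --negotiate -u : -i 'http://hostname1:50070/webhdfs/v1/?op=GETHOMEDIRECTORY'

If this also fails, the problem is between the Knox host and the NameNode (DNS, clock skew, or the Knox principal) rather than in the topology.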
10-06-2017
07:37 AM
@Geoffrey Shelton Okot, Yes, that answers my doubt. Thank you so much for your response.
10-05-2017
01:47 PM
Hi @Geoffrey Shelton Okot, I set my UserSearchFilter to sAMAccountName={0}. With this I am able to log in with an AD user, but I can't see anything in the UI other than the Access Manager tab. I am not sure how these permissions are set. Can you please provide some more information on this?
10-05-2017
09:17 AM
Hi @Geoffrey Shelton Okot, I have the proper settings; the authentication method is toggled to AD only. Actually, I am getting the following error when I try to log in with an AD user: "The username or password you entered is incorrect"
10-05-2017
07:52 AM
Hi, I am trying to log into the Ranger UI with Active Directory users, but I am not able to. However, I am able to log in with the default username/password (admin:admin). The error I get when I try to log in with an AD user is: "The username or password you entered is incorrect". Also, I am able to successfully sync AD users in Ranger, i.e., I can see the AD users in the Users/Groups tab, so I am guessing the configurations I have done so far are correct. I think I am missing some configuration for UI login. I am using Ambari version 2.2.2.0 and HDP version 2.4.3. Please suggest a solution. Thanks.
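One way to separate a sync problem from a bind problem is to reproduce the login bind with ldapsearch. A minimal sketch, with placeholder values:

# Bind as the AD user the same way the Ranger UI would; a failure here
# means the credentials or user DN pattern are wrong, not Ranger itself
ldapsearch -H ldaps://<ad-server>:636 \
  -D 'aduser@EXAMPLE.COM' -W \
  -b 'DC=example,DC=com' '(sAMAccountName=aduser)' dn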
Labels: Apache Ranger
09-21-2017
11:05 AM
Hi @mqureshi, my MySQL process is up. The issue was missing keytabs. Now my Hive service is up and running. Thanks for your reply.
09-21-2017
11:03 AM
Hi @Geoffrey Shelton Okot, thanks for your reply. I checked hiveserver2.log; according to the error messages in the logs, the keytabs were missing. After creating the keytabs, the Hive service started successfully.
09-20-2017
07:05 AM
Hi @mqureshi, nothing is running on port 9083; netstat -nlp | grep 9083 gives no output.
09-20-2017
05:22 AM
Hi, I am using a Kerberized cluster in which the Hive Metastore and Hive Server start but stop after a few minutes. When I check the Hive logs, I get the following error messages:
2017-09-20 06:06:00,514 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:main(5946)) - Metastore Thrift Server threw an exception...
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:109)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:91)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:83)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6001)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5942)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2017-09-20 06:06:00,520 INFO [Thread-4]: metastore.HiveMetaStore (HiveMetaStore.java:run(5931)) - Shutting down hive metastore.
2017-09-20 06:07:00,165 ERROR [pool-4-thread-200]: server.TThreadPoolServer (TThreadPoolServer.java:run(294)) - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:75)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
2017-09-20 06:07:01,631 ERROR [pool-4-thread-200]: server.TThreadPoolServer (TThreadPoolServer.java:run(294)) - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:75)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

Please help.
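Two quick checks for this class of startup failure, assuming HDP default keytab paths: confirm nothing else is bound to the Thrift port, and confirm the Hive service principal can actually authenticate:

# Is another process (e.g. a half-dead metastore) already holding 9083?
netstat -nlp | grep 9083

# Missing or corrupt keytabs also surface as the service dying shortly
# after startup on a Kerberized cluster
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)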
Labels: Apache Ambari, Apache Hive
09-06-2017
01:20 PM
Hi @Pravin Bhagade, this solved my issue. I had not imported the SSL certificate into the keystore; after doing so, the AD authentication works. Thank you so much.
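For anyone hitting the same PKIX error, the import step looks roughly like this; the alias, certificate path, and cacerts location are placeholders, and -keystore must point at the JVM the service actually runs on:

# Import the AD/LDAP certificate into the JVM trust store
keytool -import -trustcacerts -alias ad-ldap-cert \
  -file /tmp/ad-cert.pem \
  -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit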
09-06-2017
10:57 AM
Hi Vipin, I tried the same configurations (HDP 2.5.5, Zeppelin version 0.6.0.2.5.5.0-157), but I got the exception below.

ERROR LoginRestApi.java[postLogin]:103) - Exception in login:
org.apache.shiro.authc.AuthenticationException: LDAP naming error while attempting to authenticate user.
at org.apache.shiro.realm.ldap.AbstractLdapRealm.doGetAuthenticationInfo(AbstractLdapRealm.java:197)
Caused by: javax.naming.CommunicationException: simple bind failed: <server>:636 [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]

Advanced zeppelin-config:
zeppelin.anonymous.allowed=false

Advanced zeppelin-env, shiro_ini_content:
[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2

# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
activeDirectoryRealm.systemUsername = CN=<systemusername>,OU=<VALUE>,OU=<VALUE>,DC=<VALUE>,DC=<VALUE>,DC=<VALUE>
activeDirectoryRealm.systemPassword = <systempassword>
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/conf/zeppelin.jceks
activeDirectoryRealm.searchBase = OU=<VALUE>,OU=<VALUE>,DC=<VALUE>,DC=<VALUE>,DC=<VALUE>
activeDirectoryRealm.url = ldaps://<VALUE>:636
activeDirectoryRealm.authorizationCachingEnabled = false
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.realms = $activeDirectoryRealm
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login

[roles]

[urls]
/api/version = anon
#/** = anon
/** = authc
07-17-2017
02:09 PM
I found the issue. My Ranger Admin and Ranger database reside on different nodes, and I was giving the database host instead of the Ranger Admin host in the "policymgr_external_url" property. Correcting it solved the issue. Thanks for your reply.
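For reference, a sketch of the corrected value in Ranger's admin-properties in Ambari, with a placeholder hostname and the default Ranger Admin port:

# admin-properties
policymgr_external_url=http://<ranger-admin-host>:6080    # the Ranger Admin host, not the database host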
07-14-2017
07:13 AM
@Deepak Sharma Thanks for your reply. Following are my usersync configs:
Sync Source: LDAP/AD
LDAP/AD URL: ldaps://<server>:636
Authentication Method: ACTIVE_DIRECTORY
Username Attribute: cn
User Object Class: user
User Search Filter: cn=*
User Search Scope: sub
User Group Name Attribute: memberof
Group Member Attribute: member
Group Name Attribute: cn
Group Object Class: group
Group Search Filter: cn=*
Also, these configs worked with a different Ranger instance that I had configured before with the same LDAP cert file, but now I don't understand what the issue is.
07-13-2017
11:45 AM
1 Kudo
Hi, I am trying to set up Ranger AD usersync (HDP version 2.4.3 and Ambari version 2.2.2.0). Whether I configure it manually from the Ambari UI or pass the configuration through the blueprint, I get the following error:
ERROR UserGroupSync [UnixUserSyncThread] - Failed to initialize UserGroup source/sink. Will retry after 3600000 milliseconds. Error details: com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)
Not sure where the problem is. Ping and telnet to the AD server are successful. Also, the LDAP cert was loaded into a trustStore using the following command:
keytool -import -trustcacerts -alias myldap1 -file mycertfile.pem -keystore /etc/pki/java/cacerts
Any solution please? Attaching the usersync.log file: usersync.txt
Labels: Apache Ranger
02-02-2017
12:00 PM
Hi @vsuvagia, I have overridden the property "ranger.externalurl". Now, when I try to restart the HDFS service, it doesn't start, failing with "Connection to Ranger Admin failed". I suppose it is not able to contact the Ranger instance set in "ranger.externalurl". Do you see anything odd in this?
02-02-2017
10:45 AM
@vsuvagia Yes, I need to apply different policies across different clusters, as the cluster names will be different. I am confused about how the Ambari instances of the other clusters will identify this Ranger instance.
02-02-2017
07:17 AM
@vsuvagia Hey, thanks for your reply. In that case, do we need to create a repository in Ranger for every cluster? Say, if I have the HDFS plugin enabled on 2 clusters, then 2 HDFS repositories will be created in Ranger. Is my understanding correct?
02-02-2017
06:54 AM
Hi, is it possible to manage multiple clusters with a single Ranger instance? If so, what would the configurations be in Ambari and Ranger?
Labels: Apache Ambari, Apache Ranger
10-27-2016
05:00 AM
@vperiasamy Yes, I understand from @Deepak Sharma and @Terry Stebbens that the Ranger Hive plugin works with Beeline and not the Hive CLI.
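For reference, a minimal Beeline connection on a Kerberized cluster, with placeholder host and realm; Ranger Hive policies are enforced at HiveServer2, which Beeline goes through and the Hive CLI bypasses:

beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default;principal=hive/_HOST@EXAMPLE.COM"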
10-27-2016
04:59 AM
Right now I am using the Hive CLI as I am familiar with it. So, is it that the Ranger Hive plugin won't work with the Hive CLI at all?
10-26-2016
10:11 AM
I have attached a screenshot of the Hive audit. In it, only 'USE' access-type audits are displayed for servicetype=Hive.
10-26-2016
09:42 AM
Hi, I am trying to apply Ranger policies for Hive. I have created a policy, but it seems that the policy is not applied, and the audit logs shown in Ranger -> Audit are also confusing. I am executing queries from the Hive CLI. I have a database called 'employee' with a table 'empdetails' having columns empno, empname, and salary. When I query 'select empno from empdetails', it still shows me all the records, even though the policy states that only 'empname' should be accessible to user 'mohang'. It would be helpful if someone could provide a solution and suggestions. Screenshots attached. Thanks.
Labels: Apache Hive, Apache Ranger
10-26-2016
09:20 AM
@Hari Rongali I set mapreduce.job.queuename=<queue name> and it works. Thanks a lot for your answer.
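For completeness, the same setting from inside the Hive shell, plus the Tez equivalent (the queue name is a placeholder):

-- route MapReduce-based queries to a specific YARN queue
set mapreduce.job.queuename=myqueue;
-- when hive.execution.engine=tez, the Tez queue property applies instead
set tez.queue.name=myqueue;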
10-25-2016
06:13 AM
Hi, I have set up Hive in my cluster. When I try to enter the Hive shell, I get the following error:
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.7-4/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.tez.dag.api.TezException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1477290706349_0010 to YARN : Application application_1477290706349_0010 submitted by user hive to unknown queue: default

I understand that it is trying to submit to an unknown queue 'default', which perhaps does not exist. I have some queues in the YARN Resource Manager, a screenshot of which I have attached. It would be very helpful if anyone could guide me on how to target one of the existing queues. Are there any configuration changes that need to be made? Thanks.
Tags: Hadoop Core, Hive, queue
Labels: Apache Hive