Member since 05-10-2016
184 Posts · 60 Kudos Received · 6 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 5026 | 05-06-2017 10:21 PM |
|  | 5143 | 05-04-2017 08:02 PM |
|  | 6375 | 12-28-2016 04:49 PM |
|  | 1620 | 11-11-2016 08:09 PM |
|  | 4142 | 10-22-2016 03:03 AM |

10-05-2018 02:59 PM

@Saravana V These are all INFO messages, which makes me believe that a rebalance is taking place, since additional DataNodes have been added. See if you can find any exceptions in the HDFS logs.
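For a quick scan, something like this can surface recent exceptions in the HDFS daemon logs (the log path assumes a stock HDP layout, so adjust for your install):

# Look for recent exceptions/fatals in the HDFS daemon logs (path is an assumption).
grep -iE 'exception|fatal' /var/log/hadoop/hdfs/*.log | tail -n 20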

08-21-2018 09:27 PM

Environment
- CentOS 7
- Ambari 2.7.0.0

Error: After ambari-server setup, while deploying the desired configuration via Ambari, this error appeared in ambari-server.log:
2018-08-21 20:09:05,733  WARN [ambari-client-thread-36] HttpChannel:507 - /api/v1/clusters//requests/3
org.springframework.security.web.firewall.RequestRejectedException: The request was rejected because the URL was not normalized.
        at org.springframework.security.web.firewall.StrictHttpFirewall.getFirewalledRequest(StrictHttpFirewall.java:123)
        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:193)
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
        at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:73)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
        at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:53)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
        at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:130)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
        at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:51)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:541)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1592)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1239)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:481)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1561)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1141)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:561)
        at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:221)
        at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:210)
        at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:140)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:564)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
        at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:122)
        at org.eclipse.jetty.util.thread.strategy.ExecutingExecutionStrategy.invoke(ExecutingExecutionStrategy.java:58)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:201)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:133)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:590)
        at java.lang.Thread.run(Thread.java:745)
The Ambari UI would be stuck at the "waiting" stage (the original post showed a screenshot here).

Workaround

Try the "Retry" option; for me it went past the "waiting" state and through to the next error, which is a known issue:

File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 197, in get_content
    raise Fail("Failed to download file from {0} due to HTTP error: {1}".format(self.url, str(ex)))
resource_management.core.exceptions.Fail: Failed to download file from http://xlhive3.openstacklocal:8080/resources/mysql-connector-java.jar due to HTTP error: HTTP Error 404: Not Found

Try downloading and appropriately naming the mysql-connector JAR. If that doesn't work, copy the JAR under /var/lib/ambari-server/resources, then restart ambari-server and retry the setup (a sketch follows).
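A minimal shell sketch of that workaround; the connector version and download URL are assumptions, so substitute whichever release matches your MySQL server:

# Fetch a mysql-connector release (version and URL are illustrative).
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz
tar -xzf mysql-connector-java-5.1.45.tar.gz
# Ambari serves files in this directory at /resources; the agents request the
# JAR by this exact name, so copy it in under the expected name.
cp mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar \
   /var/lib/ambari-server/resources/mysql-connector-java.jar
ambari-server restart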

05-01-2018 04:39 PM

@Dmitro Vasilenko Either consider increasing the lock timeout for the backing metastore DB, or try using HiveServer2 (it uses an embedded metastore). Alternatively, try increasing "hive.lock.sleep.between.retries" to 120s and "hive.lock.numretries" to a higher value.
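If you want to test those two settings per session before changing hive-site.xml, they can be passed on the beeline command line (a sketch; the values are illustrative and the query is a placeholder):

# Per-session override of lock retry behaviour; values here are examples.
beeline -u "jdbc:hive2://localhost:10000/default" -n hive \
  --hiveconf hive.lock.sleep.between.retries=120s \
  --hiveconf hive.lock.numretries=200 \
  -e "select 1"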

04-27-2018 02:07 AM

Problem

While using LLAP, you might see a MetaException complaining about incorrect MySQL server syntax. This can happen at any time, even if the query worked a second ago. Here is the error output; with 10 retries by default, it can get lengthy.

0: jdbc:hive2://xlautomation-1.h.c:10500/defa> analyze table L3_MONTHLY_dw_dimRisk compute statistics for columns;
Getting log thread is interrupted, since query is done!
Error: Error while compiling statement: FAILED: RuntimeException Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient (state=42000,code=40000)
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: RuntimeException Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:277)
	at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:263)
	at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:303)
	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:244)
	at org.apache.hive.beeline.Commands.execute(Commands.java:871)
	at org.apache.hive.beeline.Commands.sql(Commands.java:729)
	at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1000)
	at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:835)
	at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:793)
	at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
	at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: RuntimeException Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:376)
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:193)
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:278)
	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:312)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:517)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:504)
	at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:310)
	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:509)
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1497)
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1482)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1667)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3627)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3679)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:465)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:358)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1300)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1272)
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:191)
	... 15 more
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException:null
	at sun.reflect.GeneratedConstructorAccessor105.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1665)
	... 26 more
Caused by: MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
	at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
	at org.apache.hadoop.hive.metastore.ObjectStore.getObjectCount(ObjectStore.java:1294)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionCount(ObjectStore.java:1277)
	at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy28.getPartitionCount(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.updateMetrics(HiveMetaStore.java:6960)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:451)
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:79)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7034)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:140)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
	at sun.reflect.GeneratedConstructorAccessor105.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1665)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3627)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3679)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:465)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:358)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1300)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1272)
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:191)
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:278)
	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:312)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:517)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:504)
	at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:310)
	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:509)
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1497)
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1482)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Assumptions
- We are using MySQL as the backing metastore DB.
- The error shows up only while running LLAP.
- HS2 works fine.
- Issue encountered on the HDP-2.6.4 distro.

Issue

This is a known bug in older versions of mysql-connector: https://bugs.mysql.com/bug.php?id=66659. It appears to be addressed in "mysql-connector-java-5.1.26" and above.

Solution(s)
- Use a newer mysql-connector version compatible with your MySQL backend DB version.
- If you do not see issues with HS2, copy the mysql-connector JAR on the host where HSI (HiveServer Interactive) is installed into "/usr/hdp/2.6.4.0-91/hive2/lib/". There is no softlink there, so the JAR should be renamed to "mysql-connector-java.jar" (a sketch of this step follows the listing below).

Validating the version

On the host which serves the HiveServer2 instance, look for mysql-connector*:

[root@xlautomation-2 ~]# find /usr/hdp/2.6.4.0-91/ -name "mysql-connector*"
/usr/hdp/2.6.4.0-91/hive2/lib/mysql-connector-java-5.1.45-bin.jar
/usr/hdp/2.6.4.0-91/hive2/lib/mysql-connector-java.jar
/usr/hdp/2.6.4.0-91/hive/lib/mysql-connector-java.jar
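A minimal sketch of the copy-and-rename step described above; the source JAR name assumes you downloaded the 5.1.45 connector:

# There is no symlink under hive2/lib, so the newer connector must itself
# carry the name mysql-connector-java.jar.
cp mysql-connector-java-5.1.45-bin.jar \
   /usr/hdp/2.6.4.0-91/hive2/lib/mysql-connector-java.jar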
Should you want to validate the version, copy the JAR to a temp directory, extract it with the jar -xvf command, and inspect the manifest:

[root@xlautomation-1]# mkdir /tmp/abc
[root@xlautomation-1]# mv mysql-connector-java.jar /tmp/abc
[root@xlautomation-1]# cd /tmp/abc
[root@xlautomation-1]# /usr/jdk64/jdk1.8.0_112/bin/jar -xvf mysql-connector-java.jar
[root@xlautomation-1]# head /tmp/abc/META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 4.4.6 20120305 (Red Hat 4.4.6-4) (Free Software Foundation
 , Inc.)
Built-By: mockbuild
Bundle-Vendor: Sun Microsystems Inc.
Bundle-Classpath: .
Bundle-Version: 5.1.17
Bundle-Name: Sun Microsystems' JDBC Driver for MySQL
Bundle-ManifestVersion: 2
The default MySQL connector for HiveServer2 ships with a higher version; you can validate it the same way (here extracted under /tmp/bcd):

[root@xlautomation-1 META-INF]# head /tmp/bcd/META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.8.2
Created-By: 1.8.0_92-b14 (Oracle Corporation)
Built-By: pb2user
Specification-Title: JDBC
Specification-Version: 4.2
Specification-Vendor: Oracle Corporation
Implementation-Title: MySQL Connector Java
Implementation-Version: 5.1.45
Implementation-Vendor-Id: com.mysql
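If you'd rather not extract the JAR at all, the manifest can be checked in place (a sketch; assumes unzip is installed):

# Print the manifest straight from the JAR and keep only the version lines.
unzip -p /usr/hdp/2.6.4.0-91/hive2/lib/mysql-connector-java.jar META-INF/MANIFEST.MF \
  | grep -Ei 'bundle-version|implementation-version'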

02-22-2018 10:00 PM

@Salvatore Matino Where exactly have you copied the JAR? Also, can you check which JAR the sqoop service is actually picking up by looking at its classpath?

02-22-2018 08:03 PM

@Ajay R You should revisit your statement. It clearly points out "jun13_miles" as either an invalid table or alias. Make sure you are not missing any columns in your SELECT statement, based on the tutorial here.

02-02-2018 08:01 PM · 1 Kudo

Issue

Soon after starting Druid services, the coordinator keeps going down. This is mostly experienced on a new setup. The exception in the coordinator log looks like this:

2018-02-02T00:37:24,123 ERROR [main] io.druid.cli.CliCoordinator - Error when starting up.  Failing.
org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'druid.druid_rules' doesn't exist [statement:"SELECT id from druid_rules where datasource=:dataSource", located:"SELECT id from druid_rules where datasource=:dataSource", rewritten:"SELECT id from druid_rules where datasource=?", arguments:{ positional:{}, named:{dataSource:'_default'}, finder:[]}]
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:284) ~[jdbi-2.63.1.jar:2.63.1]
        at io.druid.metadata.SQLMetadataRuleManager.createDefaultRule(SQLMetadataRuleManager.java:83) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataRuleManagerProvider$1.start(SQLMetadataRuleManagerProvider.java:72) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.java.util.common.lifecycle.Lifecycle.start(Lifecycle.java:263) ~[java-util-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:156) ~[druid-api-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:103) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.ServerRunnable.run(ServerRunnable.java:41) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.Main.main(Main.java:108) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'druid.druid_rules' doesn't exist [statement:"SELECT id from druid_rules where datasource=:dataSource", located:"SELECT id from druid_rules where datasource=:dataSource", rewritten:"SELECT id from druid_rules where datasource=?", arguments:{ positional:{}, named:{dataSource:'_default'}, finder:[]}]
        at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1334) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.fold(Query.java:173) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.list(Query.java:82) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.list(Query.java:75) ~[jdbi-2.63.1.jar:2.63.1]
        at io.druid.metadata.SQLMetadataRuleManager$1.withHandle(SQLMetadataRuleManager.java:97) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataRuleManager$1.withHandle(SQLMetadataRuleManager.java:85) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) ~[jdbi-2.63.1.jar:2.63.1]

Cause

A likely cause is that the druid database was created manually in MySQL and uses "latin1" as its default character set.

Resolution

This is a two-step resolution.

1. Change the character set of the backend DB to "utf8":

mysql> alter database druid character set utf8 collate utf8_general_ci;
Query OK, 1 row affected (0.01 sec)
mysql> use druid;
Database changed
mysql> show variables like "character_set_database";
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| character_set_database | utf8  |
+------------------------+-------+
1 row in set (0.02 sec)

Once you restart the Druid coordinator after this change, it may start up; however, you might still see a similar error (missing tables, or a stack like this):

2018-02-02T19:42:06,305 WARN [main] io.druid.java.util.common.RetryUtils - Failed on try 9, retrying in 51,342ms.
org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: Could not clean up [statement:"SELECT @@character_set_database = 'utf8'", located:"SELECT @@character_set_database = 'utf8'", rewritten:"SELECT @@character_set_database = 'utf8'", arguments:{ positional:{}, named:{}, finder:
[]}]
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:284) ~[jdbi-2.63.1.jar:2.63.1]
        at io.druid.metadata.SQLMetadataConnector$2.call(SQLMetadataConnector.java:130) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector.retryWithHandle(SQLMetadataConnector.java:134) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector.retryWithHandle(SQLMetadataConnector.java:143) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector.createTable(SQLMetadataConnector.java:184) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector.createRulesTable(SQLMetadataConnector.java:282) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector.createRulesTable(SQLMetadataConnector.java:476) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataRuleManagerProvider$1.start(SQLMetadataRuleManagerProvider.java:71) [druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.java.util.common.lifecycle.Lifecycle.start(Lifecycle.java:263) [java-util-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:156) [druid-api-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:103) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.ServerRunnable.run(ServerRunnable.java:41) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.cli.Main.main(Main.java:108) [druid-services-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: Could not clean up [statement:"SELECT @@character_set_database = 'utf8'", located:"SELECT @@character_set_database = 'utf8'", rewritten:"SELECT @@character_set_database = 'utf8'", arguments:{ positional:{}, named:{}, finder:[]}]
        at org.skife.jdbi.v2.BaseStatement.cleanup(BaseStatement.java:105) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.fold(Query.java:191) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.first(Query.java:273) ~[jdbi-2.63.1.jar:2.63.1]
        at org.skife.jdbi.v2.Query.first(Query.java:264) ~[jdbi-2.63.1.jar:2.63.1]
        at io.druid.metadata.storage.mysql.MySQLConnector.tableExists(MySQLConnector.java:101) ~[?:?]
        at io.druid.metadata.SQLMetadataConnector$4.withHandle(SQLMetadataConnector.java:190) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at io.druid.metadata.SQLMetadataConnector$4.withHandle(SQLMetadataConnector.java:186) ~[druid-server-0.10.1.2.6.4.0-91.jar:0.10.1.2.6.4.0-91]
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) ~[jdbi-2.63.1.jar:2.63.1]
        ... 14 more

2. Replace the JDBC driver. Druid ships a slightly older connector (5.1.14):

/usr/hdp/current/druid-coordinator/extensions/mysql-metadata-storage/mysql-jdbc-driver.jar

Download 5.1.45 from here and replace the existing mysql-jdbc-driver.jar with it:

cp mysql-connector-java-5.1.45-bin.jar /usr/hdp/2.6.4.0-91/druid/extensions/mysql-metadata-storage/mysql-jdbc-driver.jar

Restarting the Druid coordinator service after this change should get it running. (A variant that keeps a backup is sketched below.)
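A slightly safer variant of that replacement, keeping a backup of the bundled 5.1.14 driver; the paths are from the post and the new JAR is assumed to be downloaded already:

DRIVER_DIR=/usr/hdp/2.6.4.0-91/druid/extensions/mysql-metadata-storage
# Keep the original driver around in case a rollback is needed.
mv "$DRIVER_DIR/mysql-jdbc-driver.jar" "$DRIVER_DIR/mysql-jdbc-driver.jar.orig"
# Druid loads the driver by this fixed file name, so the new JAR must keep it.
cp mysql-connector-java-5.1.45-bin.jar "$DRIVER_DIR/mysql-jdbc-driver.jar"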

01-26-2018 11:01 PM · 2 Kudos

Simple illustration of locking in Hive when ACID is enabled

Considerations for illustration
- Cluster version: HDP-2.5.6.0
- Hive version: Hive 1.2.1000
- Enabled with the following properties in place:
  - hive.support.concurrency=true
  - hive.compactor.initiator.on=true
  - hive.compactor.worker.threads=1
  - hive.exec.dynamic.partition.mode=nonstrict
  - hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager (if non-Ambari cluster)

Types of Supported Locks
- S = SHARED or SHARED_READ
- X = EXCLUSIVE

Tables used for testing
- orc_tab (ORC format table with col1 int and col2 string), non-transactional
- orc_tab_bucketed (ORC format table with col1 int and col2 string), transactional (see the DDL sketch below)
- txt_tab (TEXT format table with col1 int and col2 string), non-transactional, used for loading
- Each table holds close to 5 GB of data on a single-node cluster
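A hedged beeline sketch of the DDL for the transactional test table; the columns come from the description above, while the bucket count is an assumption (ACID tables in Hive 1.x must be clustered into buckets):

beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "
CREATE TABLE orc_tab_bucketed (col1 int, col2 string)
CLUSTERED BY (col1) INTO 4 BUCKETS   -- bucket count is illustrative
STORED AS ORC
TBLPROPERTIES ('transactional'='true');"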
SCENARIO 1 (Non-Transactional Table)
SELECT blocks ALTER; SELECT starts first, followed by ALTER.

Session A
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "select col1 from orc_tab order by col1 limit 2"

Session B
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "ALTER TABLE orc_tab ADD COLUMNS (col3 string)"

Session C
+----------+-----------+----------+------------+-------------+---------------+--------------+-----------------+-----------------+----------------+-------+-------------------------+-----------------------------------------------------------+--+
|  lockid  | database  |  table   | partition  | lock_state  |  blocked_by   |  lock_type   | transaction_id  | last_heartbeat  |  acquired_at   | user  |        hostname         |                        agent_info                         |
+----------+-----------+----------+------------+-------------+---------------+--------------+-----------------+-----------------+----------------+-------+-------------------------+-----------------------------------------------------------+--+
| Lock ID  | Database  | Table    | Partition  | State       | Blocked By    | Type         | Transaction ID  | Last Heartbeat  | Acquired At    | User  | Hostname                | Agent Info                                                |
| 31.1     | default   | orc_tab  | NULL       | ACQUIRED    |               | SHARED_READ  | NULL            | 1517003062122   | 1517003062122  | hive  | xlpatch.openstacklocal  | hive_20180126214422_aaeb4b28-5170-4131-b509-ef0213c8b842  |
| 32.1     | default   | orc_tab  | NULL       | WAITING     | 31.1          | EXCLUSIVE    | NULL            | 1517003063314   | NULL           | hive  | xlpatch.openstacklocal  | hive_20180126214422_a65af104-05d1-4c19-ab54-7bb37b4cdbfa  |
+----------+-----------+----------+------------+-------------+---------------+--------------+-----------------+-----------------+----------------+-------+-------------------------+-----------------------------------------------------------+--+

SCENARIO 2 (Non-Transactional Table)
SELECT blocks INSERT OVERWRITE; SELECT starts first, followed by INSERT OVERWRITE.

Session A
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "select col1 from orc_tab order by col1 limit 2"

Session B
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "INSERT OVERWRITE TABLE orc_tab SELECT col1,col2 from txt_tab"

Session C
0: jdbc:hive2://localhost:10000/default> show locks;
| Lock ID | Database | Table | Partition | State | Blocked By | Type | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
| 36.1 | default | orc_tab | NULL | ACQUIRED | | SHARED_READ | NULL | 1517003567582 | 1517003567582 | hive | xlpatch.openstacklocal | hive_20180126215247_7537e30b-d5bf-4fc8-aa23-8e860efe1ac8 |
| 37.1 | default | txt_tab | NULL | WAITING | | SHARED_READ | NULL | 1517003568897 | NULL | hive | xlpatch.openstacklocal | hive_20180126215248_875685ed-a552-4009-892c-e13c61cf7eb5 |
| 37.2 | default | orc_tab | NULL | WAITING | 36.1 | EXCLUSIVE | NULL | 1517003568897 | NULL | hive | xlpatch.openstacklocal | hive_20180126215248_875685ed-a552-4009-892c-e13c61cf7eb5 |

SCENARIO 3 (Non-Transactional Table)
SELECT blocks INSERT; SELECT starts first, followed by INSERT.

Session A
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "select col1 from orc_tab order by col1 limit 2"

Session B
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "INSERT INTO orc_tab SELECT col1,col2 from txt_tab limit 20"

Session C
0: jdbc:hive2://localhost:10000/default> show locks;
| Lock ID | Database | Table | Partition | State | Blocked By | Type | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
| 38.1 | default | orc_tab | NULL | ACQUIRED | | SHARED_READ | NULL | 1517004119030 | 1517004119030 | hive | xlpatch.openstacklocal | hive_20180126220158_775842e7-5e34-42d0-b574-874076fd5204 |
| 39.1 | default | txt_tab | NULL | WAITING | | SHARED_READ | NULL | 1517004120971 | NULL | hive | xlpatch.openstacklocal | hive_20180126220200_9e9eeb8c-9c32-42fd-8ddf-c96f08699224 |
| 39.2 | default | orc_tab | NULL | WAITING | 38.1 | EXCLUSIVE | NULL | 1517004120971 | NULL | hive | xlpatch.openstacklocal | hive_20180126220200_9e9eeb8c-9c32-42fd-8ddf-c96f08699224 |
4 rows selected (0.028 seconds)

SCENARIO 4 (Transactional Table)
SELECT does not block INSERT; SELECT starts first, followed by INSERT.

Session A
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "select col1 from orc_tab_bucketed order by col1 limit 2"

Session B
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "INSERT INTO orc_tab_bucketed SELECT col1,col2 from txt_tab limit 20"

Session C
0: jdbc:hive2://localhost:10000/default> show locks;
| Lock ID | Database | Table | Partition | State | Blocked By | Type | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
| 42.1 | default | orc_tab_bucketed | NULL | ACQUIRED | | SHARED_READ | NULL | 1517004495025 | 1517004495025 | hive | xlpatch.openstacklocal | hive_20180126220814_cae3893a-8e97-49eb-8b07-a3a60c4a6dc2 |
| 43.1 | default | txt_tab | NULL | ACQUIRED | | SHARED_READ | 3 | 0 | 1517004495874 | hive | xlpatch.openstacklocal | hive_20180126220815_a335e284-476a-42e0-b758-e181e6ab44e9 |
| 43.2 | default | orc_tab_bucketed | NULL | ACQUIRED | | SHARED_READ | 3 | 0 | 1517004495874 | hive | xlpatch.openstacklocal | hive_20180126220815_a335e284-476a-42e0-b758-e181e6ab44e9 |
4 rows selected (0.02 seconds)

SCENARIO 5 (Transactional Table)
SELECT blocks ALTER; SELECT starts first, followed by ALTER.

Session A
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "select col1 from orc_tab_bucketed order by col1 limit 2"

Session B
beeline -u "jdbc:hive2://localhost:10000/default" -n hive -p '' -e "ALTER TABLE orc_tab_bucketed ADD COLUMNS (col3 string)"

Session C
0: jdbc:hive2://localhost:10000/default> show locks;
Getting log thread is interrupted, since query is done!
| Lock ID | Database | Table | Partition | State | Blocked By | Type | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
| 53.1 | default | orc_tab_bucketed | NULL | ACQUIRED | | SHARED_READ | NULL | 1517005855005 | 1517005855005 | hive | xlpatch.openstacklocal | hive_20180126223053_db2d0054-6cb6-48fb-b732-6ca677007695 |
| 54.1 | default | orc_tab_bucketed | NULL | WAITING | 53.1 | EXCLUSIVE | NULL | 1517005855870 | NULL | hive | xlpatch.openstacklocal | hive_20180126223054_6294af5a-15da-4178-9a83-40f150e08cb1 |
3 rows selected (0.064 seconds)

Synopsis

Without the "transactional" feature set to true:
- EXCLUSIVE lock (ALTER) waits for SHARED (SELECT)
- EXCLUSIVE lock (INSERT OVERWRITE) waits for SHARED (SELECT)
- EXCLUSIVE lock (INSERT) waits for SHARED (SELECT)

With "transactional" enabled:
- EXCLUSIVE lock (ALTER) still waits for SHARED (SELECT)
- INSERT and SELECT both take SHARED locks, so neither blocks the other

01-23-2018 07:12 PM

@Yuval Smp Try disabling the firewall and iptables; once you are through, rerun ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/lib/mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar. Track the ambari-server log to ensure there are no errors, then try again.

01-22-2018 03:29 PM

@Julian Blin I suppose it is complaining about a rule in the auth_to_local config. This link explains it and gives an example: https://community.hortonworks.com/questions/42167/no-rules-applied-to-rangerlookup.html