Member since: 08-05-2016
76 Posts
10 Kudos Received
13 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3493 | 11-06-2018 08:19 PM |
|  | 2712 | 08-31-2018 05:34 PM |
|  | 1926 | 05-02-2018 02:21 PM |
|  | 2771 | 04-28-2018 09:32 PM |
|  | 3279 | 11-09-2017 06:02 PM |
			
    
	
		
		
10-04-2017 10:17 PM
Users are not going to query the MiddleManager or Historical nodes directly, so there is no need to have them on the edge nodes.
						
					
    
	
		
		
10-04-2017 09:40 PM
Yes, you can bundle the Broker, Router, Coordinator, and Overlord on one physical node, and you can have a couple of such nodes. For Historicals and MiddleManagers, it depends on your use case: if you have more historical data to serve, you need more Historicals, and vice versa.

For the Hive integration, you need to set these properties:

set hive.druid.metadata.username=${DRUID_USERNAME};
set hive.druid.metadata.password=${DRUID_PASSWORD};
set hive.druid.metadata.uri=jdbc:mysql://${DRUID_HOST}/${DATA_BASE_NAME};
set hive.druid.broker.address.default=${DRUID_HOST}:8082;
set hive.druid.coordinator.address.default=${DRUID_HOST}:8081;
set hive.druid.storage.storageDirectory=/apps/hive/warehouse;

Make sure that /apps/hive/warehouse is readable by the hadoop group. Starting from HDP 2.6.3, all of these properties will be set automatically.
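
Once those properties are in place, an existing Druid datasource can be mapped into Hive through the Druid storage handler. A minimal sketch, assuming a local HiveServer2 and using "wikiticker" as a placeholder datasource name:

```bash
# Hypothetical sketch: map an existing Druid datasource into Hive.
# The JDBC URL and the "wikiticker" datasource name are placeholders.
beeline -u 'jdbc:hive2://localhost:10000/default' -e '
CREATE EXTERNAL TABLE druid_wikiticker
STORED BY "org.apache.hadoop.hive.druid.DruidStorageHandler"
TBLPROPERTIES ("druid.datasource" = "wikiticker");'
```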
						
					
    
	
		
		
10-02-2017 03:27 PM
It seems like you are compiling/building against the wrong Druid version. Can you please explain how you are building this? Are you using Druid 0.10.1 or a previous release?
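
One quick way to check which Druid version a build actually resolves is to inspect the dependency tree. A rough sketch, assuming a Maven build (Druid releases of the 0.10.x era use the io.druid group id):

```bash
# Hypothetical check: list the Druid artifacts the build pulls in.
mvn dependency:tree -Dincludes=io.druid

# Or inspect the jars already on the classpath of the built artifact
# (the lib path below is a placeholder):
ls /path/to/your/build/lib | grep -i druid
```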
						
					
    
	
		
		
10-02-2017 01:58 PM
Can you please paste the error log stack?
						
					
    
	
		
		
09-29-2017 02:54 PM
The only fix is to apply the patch and replace the druid-hive-handler jar; it is only one jar that needs to be replaced. Otherwise, 2.6.3 will have the fix if you want to wait. Sorry.
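
If it helps, the swap itself is just a file replacement on the HiveServer2 host. A rough sketch with placeholder paths (the exact lib directory and jar name depend on your HDP layout, so verify both first):

```bash
# Hypothetical sketch: back up the existing druid-hive-handler jar and
# drop in the patched one. All paths below are placeholders.
HIVE_LIB=/usr/hdp/current/hive-server2/lib
ls "$HIVE_LIB" | grep -i druid-handler            # locate the current jar
cp "$HIVE_LIB"/hive-druid-handler-*.jar /tmp/     # keep a backup copy
cp /path/to/patched/hive-druid-handler.jar "$HIVE_LIB"/
# Restart HiveServer2 afterwards so the replacement jar is picked up.
```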
						
					
    
	
		
		
09-13-2017 05:24 PM
@Adrian Gay I don't think the metadata setup is the issue, since it is set to MySQL by default. What do you see in your Hive server logs? For instance, if the Hive server can connect to the Druid Broker but cannot retrieve the datasource, you should see something like this in hive.log:

error in initSerDe: org.apache.hadoop.hive.serde2.SerDeException Connected to Druid but could not retrieve datasource information
org.apache.hadoop.hive.serde2.SerDeException: Connected to Druid but could not retrieve datasource information
        at org.apache.hadoop.hive.druid.serde.DruidSerDe.submitMetadataRequest(DruidSerDe.java:296) ~[hive-druid-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.druid.serde.DruidSerDe.initialize(DruidSerDe.java:178) ~[hive-druid-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:423) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:833) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:869) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4227) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:347) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1111) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_77]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_77]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_77]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:233) ~[hadoop-common-2.7.3.2.5.0.1-216.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148) ~[hadoop-common-2.7.3.2.5.0.1-216.jar:?]
2017-09-13T10:16:51,941 ERROR [6f8f271e-3af1-4385-a79a-4d37aecb9fde main] metadata.Table: Unable to get field from serde: org.apache.hadoop.hive.druid.serde.DruidSerDe
java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Connected to Druid but could not retrieve datasource information)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:833) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:869) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4227) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:347) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1111) ~[hive-exec-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) ~[hive-cli-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_77]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_77]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_77]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:233) ~[hadoop-common-2.7.3.2.5.0.1-216.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148) ~[hadoop-common-2.7.3.2.5.0.1-216.jar:?] 
						
					
    
	
		
		
09-13-2017 02:22 PM
@Adrian Gay Either your Broker address setting is wrong, which can be fixed by running Hive with this config: --hiveconf hive.druid.broker.address.default=hostname_of_broker:8082, or the datasource you are referring to ("druid.datasource"="wikiticker") does not exist in Druid yet.
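
You can check both conditions quickly from the command line. A small sketch with placeholder host names (coordinator-host, broker-host); the first call uses Druid's standard Coordinator API for listing datasources:

```bash
# Hypothetical sketch: confirm the datasource exists, then launch Hive
# with the Broker address override. Host names are placeholders.
curl http://coordinator-host:8081/druid/coordinator/v1/datasources

hive --hiveconf hive.druid.broker.address.default=broker-host:8082
```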
						
					
    
	
		
		
08-07-2017 05:19 PM
@Arnault Droz This seems to correspond to this issue: https://devcentral.f5.com/questions/wrong-logrotate-configuration-for-var-log-monitors-generating-an-excess-of-cron-email-messages. I am not sure why we are not hitting this ourselves, but it may be an OS version mismatch. Can you please try to fix it locally by editing the template file from the Ambari UI and replacing "size {{druid_log_maxfilesize}}MB" with "size {{druid_log_maxfilesize}}M"?
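
If you would rather patch the rendered file directly on the node while waiting on the template fix, something like the following should work; the logrotate file path is a guess, so locate the real file first:

```bash
# Hypothetical sketch: find the Druid logrotate config, then drop the
# trailing "B" so the directive reads "size 100M" instead of "size 100MB".
ls /etc/logrotate.d/ | grep -i druid                             # find the real file name
sudo sed -i 's/\(size [0-9]\+\)MB/\1M/' /etc/logrotate.d/druid   # placeholder path
```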
						
					
    
	
		
		
08-03-2017 04:57 PM
							 
Hi @Arnault Droz, I have not used Kylin at all, but I can speak for Druid + Hive and Superset (the visualization tool) if you have any questions about those. Here is a link to a blog about a Star Schema Benchmark setup (6 billion rows scale). This should give you an idea of the performance to expect if you go the Druid route to store the cubes. Please let me know if you have further questions.
						
					
    
	
		
		
08-02-2017 02:22 PM
@yjiang I suspect this NPE is a side effect of running out of memory. Can you share the config of the Historical? Also, I see that the number of allocated byte buffers is at least 77 times 1 GB; does this machine have that much free RAM? How many physical cores does this machine have? We usually recommend that the number of intermediate processing buffers be at most the number of physical cores minus one.
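
To make that sizing rule concrete, here is a small sketch that derives suggested values from the machine itself; the property names are Druid's standard processing settings, and the 1 GB buffer size mirrors the numbers above:

```bash
# Hypothetical sizing sketch, following the "physical cores - 1" guidance.
CORES=$(nproc)                       # caution: nproc also counts hyperthreads
THREADS=$(( CORES - 1 ))
echo "druid.processing.numThreads=${THREADS}"
echo "druid.processing.buffer.sizeBytes=1073741824"   # 1 GB per buffer
# Rough direct-memory floor: one buffer per thread plus one spare.
echo "Plan for at least $(( THREADS + 1 )) GB of direct memory."
```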
						
					