Member since: 05-30-2016
Posts: 14
Kudos Received: 6
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17434 | 05-25-2018 03:18 PM |
| | 47316 | 03-29-2018 04:13 PM |
| | 652 | 04-14-2017 09:30 PM |
03-26-2019
04:50 PM
Currently the best approach is to query Druid through Apache Hive and rely on Hive's ability to enforce Ranger policies on Druid-backed tables.
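For illustration, a minimal sketch of exposing a Druid datasource through Hive so that Ranger policies on the Hive table govern access; the HiveServer2 host, datasource name, and table name are placeholders:

```bash
# Map an existing Druid datasource into Hive (hs2-host and "wikipedia"
# are placeholders). Once the table exists, Ranger policies defined on
# it apply to queries that hit the underlying Druid datasource.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
  CREATE EXTERNAL TABLE druid_wikipedia
  STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
  TBLPROPERTIES (\"druid.datasource\" = \"wikipedia\")"
```

Queries then go through HiveServer2 as usual (e.g. SELECT ... FROM druid_wikipedia), which is where Ranger authorization is enforced.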
02-06-2019
04:38 PM
@Sumeet Chauhan can you add more details on how you manually added the extensions, and which extensions you deleted in the first place?
09-07-2018
06:53 PM
1 Kudo
Hi Thuy Le, Realtime nodes have been deprecated and are replaced by realtime index tasks. To ingest data from Kafka, you can use the Kafka indexing service: http://druid.io/docs/latest/development/extensions-core/kafka-ingestion.html Here is a tutorial for the same: http://druid.io/docs/latest/tutorials/tutorial-kafka.html
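As a quick sketch of what using the Kafka indexing service looks like operationally, assuming an Overlord at overlord-host:8090 and a supervisor spec saved as kafka-supervisor.json (both placeholders; the spec format is covered in the docs linked above):

```bash
# Submit the supervisor spec to the Druid Overlord; the Kafka indexing
# service then manages realtime index tasks for the topic.
curl -X POST -H 'Content-Type: application/json' \
  -d @kafka-supervisor.json \
  http://overlord-host:8090/druid/indexer/v1/supervisor

# Confirm the supervisor was accepted.
curl http://overlord-host:8090/druid/indexer/v1/supervisor
```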
05-25-2018
03:18 PM
2 Kudos
@Khouloud Landari In HDP, Superset is installed inside a Python virtual environment. In order to install psycopg2, you will need to run the following command: /usr/hdp/current/superset/bin/pip install psycopg2
Alternatively, we package Superset with PyGreSQL, and you can change the connection URI to use that instead: postgresql+pygresql://user:password@host:port/dbname
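For copy-paste convenience, both options as commands (user, password, host, port, and dbname are placeholders):

```bash
# Option 1: install psycopg2 into Superset's bundled virtualenv
# (note: the pip inside the venv, not the system pip).
/usr/hdp/current/superset/bin/pip install psycopg2

# Option 2: keep the bundled PyGreSQL driver and point the SQLAlchemy
# connection URI at it instead:
#   postgresql+pygresql://user:password@host:port/dbname
```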
03-29-2018
04:13 PM
@jmedel The root cause of the failure is an invalid parse spec:
java.lang.IllegalArgumentException: Instantiation of [simple type, class io.druid.data.input.impl.DelimitedParseSpec] value failed: If columns field is not set, the first row of your data must have your header and hasHeaderRow must be set to true.
Check whether your input file has a header row. If it does, set hasHeaderRow = true in the parseSpec; otherwise, you need to specify the list of columns in your parseSpec so that Druid knows which columns are present in the file.
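A minimal sketch of the two parseSpec variants (column names like "ts" and "page" are placeholders; trim to your actual schema):

```bash
# Variant 1: the file has a header row, so Druid reads column names from it.
cat > parsespec-header.json <<'EOF'
{
  "format": "tsv",
  "hasHeaderRow": true,
  "timestampSpec": { "column": "ts", "format": "auto" },
  "dimensionsSpec": { "dimensions": ["page", "user"] }
}
EOF

# Variant 2: no header row, so the columns are listed explicitly.
cat > parsespec-columns.json <<'EOF'
{
  "format": "tsv",
  "columns": ["ts", "page", "user"],
  "timestampSpec": { "column": "ts", "format": "auto" },
  "dimensionsSpec": { "dimensions": ["page", "user"] }
}
EOF
```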
03-27-2018
03:08 PM
@Sateesh Battu, it looks like you have a conflict between your logging libraries. The main reason for the failure is below; make sure you are connecting to HiveServer2 and have the correct logging jars on the classpath:
java.lang.NoSuchMethodError: org.apache.hadoop.hive.ql.session.SessionState$LogHelper.<init>(Lorg/slf4j/Logger;)V at org.apache.hadoop.hive.druid.DruidStorageHandler.<clinit>(DruidStorageHandler.java:93)
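One quick way to hunt for the conflict, assuming a standard HDP layout (the /usr/hdp/current paths are placeholders for your install):

```bash
# List the logging jars HiveServer2 and the Hadoop client pull in; a mix
# of incompatible slf4j/log4j versions is the usual culprit for this
# kind of NoSuchMethodError. Paths assume a typical HDP layout.
ls /usr/hdp/current/hive-server2/lib | grep -Ei 'slf4j|log4j'
ls /usr/hdp/current/hadoop-client/lib | grep -Ei 'slf4j|log4j'
```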
03-27-2018
03:02 PM
Hi Jasper, changing the visualization type to Big Number should work and display it as a Big Number. After that, you will need to click the "Save As" button and add the slice to any dashboard of your choice.
03-14-2018
02:25 PM
Please also share the spec file (hadoop_index_spec.json) and the complete YARN application logs.
03-14-2018
02:06 PM
Great article showcasing various technologies working together. You could also simplify the above flow a bit by skipping Tranquility: Twitter -> NiFi -> Kafka -> Druid. Druid supports ingesting data directly from Kafka (without Tranquility); see http://druid.io/docs/latest/development/extensions-core/kafka-ingestion.html
12-13-2017
01:15 PM
It is failing when trying to connect to the Druid broker. Check the following:
1. The property "hive.druid.broker.address.default" in Hive is set to the correct "<broker-ip>:<port>" value.
2. The Druid broker is running; you can check this by hitting the "http://<broker-ip>:<port>/status" HTTP endpoint.
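Both checks as commands; the broker address and HiveServer2 host are placeholders, and 8082 is only Druid's default broker port:

```bash
# 1. Point Hive at the broker for the current session (this can also be
#    set permanently in hive-site.xml or via Ambari).
beeline -u "jdbc:hive2://hs2-host:10000/default" \
  -e "SET hive.druid.broker.address.default=broker-ip:8082;"

# 2. Verify the broker is up; a healthy broker answers on /status.
curl http://broker-ip:8082/status
```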
12-07-2017
03:47 PM
From the description, it seems that port forwarding on the VM is not set up properly. Please make sure port forwarding is configured correctly.
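If the VM runs under VirtualBox (an assumption; adjust for your hypervisor), a NAT forwarding rule can be added and verified like this, with the VM name and ports as placeholders:

```bash
# Forward host port 8080 to guest port 8080 (e.g. for Ambari); run with
# the VM powered off. "Hortonworks Sandbox" and the ports are placeholders.
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "ambari,tcp,,8080,,8080"

# Verify the rule was registered.
VBoxManage showvminfo "Hortonworks Sandbox" | grep -i rule
```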
10-18-2017
08:54 AM
Can you try setting mapreduce.job.classloader=true in your MapReduce configs?
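If this is for a Druid Hadoop index task (an assumption from context), the property usually goes into the spec's tuningConfig.jobProperties; the fragment below is illustrative and must be merged into your full spec:

```bash
# Print the relevant spec fragment; merge it into the real index spec,
# or set the property in mapred-site.xml for plain MapReduce jobs.
cat <<'EOF'
"tuningConfig": {
  "type": "hadoop",
  "jobProperties": {
    "mapreduce.job.classloader": "true"
  }
}
EOF
```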
06-08-2017
07:08 PM
https://issues.apache.org/jira/browse/HIVE-16576 should fix this.
04-14-2017
09:30 PM
3 Kudos
Thanks for reporting this. It seems like an issue with the packaging on CentOS 7. We are working on fixing it. In the meantime, can you try changing the Superset database type to 'mysql' or 'postgresql'? I verified that it works with MySQL.