Member since
04-28-2018
14
Posts
0
Kudos Received
0
Solutions
05-10-2018
03:05 PM
Hi Erkan, can you please provide more details on how you resolved the issue? I am facing the same issue while installing HDP 2.6.4. Thanks in advance.
05-10-2018
03:04 PM
Hi Erkan, can you please provide more details on how you resolved the issue? I am also facing the same issue while installing HDP 2.6.4. Thanks in advance.
05-07-2018
02:13 PM
When we create a table as follows:
CREATE TABLE druid_table (
  `__time` TIMESTAMP,
  `userid` STRING,
  `num_l`  FLOAT
)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY");
we get the following error:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: java.io.FileNotFoundException: File /tmp/workingDirectory/.staging-hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3/segmentsDescriptorDir does not exist. (state=08S01,code=1)
Can you please help?
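The missing `.staging-hive_*/segmentsDescriptorDir` path suggests the Hive-to-Druid staging area under `/tmp/workingDirectory` was never created or was cleaned up mid-job. A minimal sketch, assuming the Hive property `hive.druid.working.directory` (an assumption inferred from the `/tmp/workingDirectory` path in the error) controls that staging location:

```sql
-- Sketch, not a confirmed fix: point the Hive/Druid staging area at an
-- HDFS location the hive user can create and write to, then retry the DDL.
-- The property name hive.druid.working.directory is an assumption based on
-- the /tmp/workingDirectory default seen in the error message.
SET hive.druid.working.directory=/tmp/druid/workingDirectory;

CREATE TABLE druid_table (
  `__time` TIMESTAMP,
  `userid` STRING,
  `num_l`  FLOAT
)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY");
```

If the directory exists but the error persists, checking HDFS permissions on the staging path for the `hive` user would be the next step.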
05-02-2018
11:23 AM
Hi, I am creating a table as follows. The data up to, say, 10-Apr-2018 is loaded. How do I load data from 11-Apr up to the latest day? If I do INSERT INTO TABLE test_druid, it fails. Do I need to drop the month segment (Apr-18) and reload the data for the entire month? If so, can you please give the steps for doing this from Hive? I am using Beeline for all my operations.
CREATE TABLE test_druid
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "test_druid", "druid.segment.granularity" = "MONTH", "druid.query.granularity" = "DAY")
AS SELECT CAST(trans_date AS TIMESTAMP) AS `__time`, col1, col2, col3
FROM testdb.test_hive_Table
WHERE to_date(trans_Date) >= '2018-01-01';
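For reference, an append-style load would look like the sketch below. This assumes the Hive build in use supports INSERT INTO for Druid-backed tables (later Hive releases do; the poster's failure suggests this build may not), and that the column order matches the original CTAS, with `__time` first:

```sql
-- Sketch: append only the new days to the existing Druid datasource,
-- rather than dropping and rebuilding the whole April segment.
-- Assumes INSERT INTO works against DruidStorageHandler tables here.
INSERT INTO TABLE test_druid
SELECT CAST(trans_date AS TIMESTAMP) AS `__time`, col1, col2, col3
FROM testdb.test_hive_Table
WHERE to_date(trans_Date) > '2018-04-10';
```

If INSERT INTO is not supported in this build, the fallback is indeed to drop and recreate the affected MONTH segment with a CTAS filtered to that month.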
Labels:
Apache Hive
05-02-2018
11:19 AM
Hi,
I am creating a table in Hive (LLAP) from an existing table, as follows:
CREATE TABLE test_druid STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ( "druid.datasource" = "test_druid", "druid.segment.granularity" = "MONTH", "druid.query.granularity" = "DAY")
as select cast(trans_date as timestamp) as `__time` , col1, col2, col3 from testdb.test_hive_Table where to_date(trans_Date) >= '2018-01-01';
I am getting the following error. Nowhere did I mention the Druid storage location as "/druid/segments"; I don't know where it is picking that up.
In Ambari, I have set druid.storage.storageDirectory=/user/druid/data. Not sure what is causing the issue.
Please help.
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hive, access=WRITE, inode="/druid/segments/ff7e385a0fbf4d84973be64d88f31c02":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FS
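One likely explanation: `/druid/segments` is the Hive storage handler's own default deep-storage path, configured separately from the Druid-side `druid.storage.storageDirectory` set in Ambari. A sketch, assuming the Hive-side property is named `hive.druid.storage.storageDirectory` (an assumption; the default appearing to be `/druid/segments` would explain the error):

```sql
-- Sketch: align Hive's segment-output path with the Druid deep storage
-- location configured in Ambari. Property name is an assumption.
-- Alternatively, grant the hive user write access on /druid/segments
-- (e.g. via hdfs dfs -chown / -chmod as the hdfs superuser).
SET hive.druid.storage.storageDirectory=/user/druid/data;
```

Either making the two paths agree or opening up permissions on the default path should clear the AccessControlException.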
Labels:
Apache Hive
05-02-2018
09:17 AM
Hi Slim, thanks for your response. That worked. I have a few more questions; I will open a new thread.
04-28-2018
06:35 PM
hive-llap-druid-eror.txt Hi, I am testing Druid with a very small amount of data; treat my case as a POC before working on a big data set. When creating a table on Druid, I am getting a MySQL JDBC error, though I have specified Postgres as the database for my metastore. Am I missing a configuration parameter? Please help. Thanks in advance.
Labels:
Apache Hive