Hi,
I am creating a table using the following DDL:
CREATE TABLE poc_db.poc_druid_v2_07may
(`__time` timestamp, col1 string, col2 string, metric1 double, trans_count int)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ( "druid.segment.granularity" = "DAY", "druid.query.granularity" = "DAY");
My objective is to create a Hive table backed by the Druid storage handler and then insert data into it. When I execute the above DDL, I get the following error:
0: jdbc:hive2://HYDHADDAT01:10500> CREATE TABLE poc_db.poc_druid_v2_07may (`__time` timestamp, col1 string, col2 string, metric1 double, trans_count int) STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ( "druid.segment.granularity" = "DAY", "druid.query.granularity" = "DAY");
INFO : Compiling command(queryId=hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3): CREATE TABLE poc_db.poc_druid_v2_07may (`__time` timestamp, col1 string, col2 string, metric1 double, trans_count int) STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ( "druid.segment.granularity" = "DAY", "druid.query.granularity" = "DAY")
INFO : We are setting the hadoop caller context from HIVE_SSN_ID:e2b2c4e9-d858-4d84-8556-445a01e70657 to hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3); Time taken: 0.004 seconds
INFO : We are resetting the hadoop caller context to HIVE_SSN_ID:e2b2c4e9-d858-4d84-8556-445a01e70657
INFO : Setting caller context to query id hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3
INFO : Executing command(queryId=hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3): CREATE TABLE poc_db.poc_druid_v2_07may (`__time` timestamp, col1 string, col2 string, metric1 double, trans_count int) STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ( "druid.segment.granularity" = "DAY", "druid.query.granularity" = "DAY")
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: java.io.FileNotFoundException: File /tmp/workingDirectory/.staging-hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3/segmentsDescriptorDir does not exist.
INFO : Resetting the caller context to HIVE_SSN_ID:e2b2c4e9-d858-4d84-8556-445a01e70657
INFO : Completed executing command(queryId=hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3); Time taken: 0.385 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: java.io.FileNotFoundException: File /tmp/workingDirectory/.staging-hive_20180507130925_227f2e48-d049-464e-b2cd-43009b3398b3/segmentsDescriptorDir does not exist. (state=08S01,code=1)
I think the directory Hive is trying to create under "/tmp/workingDirectory...." is needed when subsequent SQL statements are executed. Is there a way to configure the location of this directory instead of /tmp?
How can I fix the above error? I have enough space in /tmp. Please help.
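For reference, I believe the staging path in the error comes from the hive.druid.working.directory property (its default appears to be /tmp/workingDirectory). If that is the right property, something like the following might relocate the staging directory; the HDFS path below is only an example of my own, not a required value:

```sql
-- Assumption: hive.druid.working.directory controls the staging location
-- used by the Druid storage handler. /user/hive/druid-staging is an
-- example path; it must exist and be writable by the hive user.
SET hive.druid.working.directory=/user/hive/druid-staging;

CREATE TABLE poc_db.poc_druid_v2_07may
(`__time` timestamp, col1 string, col2 string, metric1 double, trans_count int)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY", "druid.query.granularity" = "DAY");
```

I have not confirmed whether this property takes effect at the session level or has to be set in hive-site.xml, so corrections are welcome.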