Member since 06-14-2016 · 7 Posts · 2 Kudos Received · 0 Solutions
06-17-2016
07:02 PM
1 Kudo
@Chris Nauroth Many thanks for answering the question; you saved my day. On the same note, I have a follow-up question. I executed the query after setting the property to false. Do we need to set the property to false every time we run the query in production, or can I set it to false while creating the table? Thanks again.
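As a general note (the exact property name isn't quoted in this thread, so `hive.some.property` below is a placeholder): a `SET` command applies only to the current session, so it has to be re-issued in each new Beeline/Hive session; it cannot be baked into the table's DDL. To apply it to every session, it would instead go in hive-site.xml (or be set through Ambari):

```sql
-- Per-session: must be run again in every new session, before the query.
SET hive.some.property=false;  -- placeholder; substitute the actual property name

-- Cluster-wide alternative: add to hive-site.xml instead, e.g.
--   <property>
--     <name>hive.some.property</name>
--     <value>false</value>
--   </property>
```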
06-17-2016
03:46 PM
1 Kudo
Hello Experts, I have created a partitioned table as below:

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE EXTERNAL TABLE `table`(
  columns )
PARTITIONED BY (year INT, month INT, day INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS parquet
LOCATION
'path';

While querying the table on one of the partition columns, I get the following error; all other regular columns work fine. Thanks.

0: jdbc:hive2://> select * from table where year=2015;
Error: java.io.IOException: java.lang.IllegalArgumentException: Column [year] was not found in schema! (state=,code=0)

Can you please let me know what I am doing wrong here?
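One way to narrow this down (a sketch, using the table name from the post): partition columns such as `year` live in the metastore, not inside the Parquet files themselves, so it can help to confirm how Hive sees the table and which partitions are registered:

```sql
-- Partition columns are metastore metadata, not columns in the Parquet files.
DESCRIBE FORMATTED `table`;   -- shows data columns vs. partition columns
SHOW PARTITIONS `table`;      -- lists the partitions the metastore knows about
MSCK REPAIR TABLE `table`;    -- registers partition directories that exist only on HDFS
```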
Labels: Apache Hive
06-14-2016
02:32 PM
CREATE EXTERNAL TABLE `table1`(
  columns )
PARTITIONED BY (
  `dt` string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '|'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'hdfs:location'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465914063')
06-14-2016
02:30 PM
Thanks Sindhu, I have already tested that and it works fine. However, I am still wondering why the original statement does not work.
06-14-2016
02:06 PM
Hello Experts! I have created table1 using a custom COBOL SerDe, and table2 with the same DDL but a different storage format (Parquet) and partitioned. Later, while trying to insert data into table2 using the following statement, I get the error below. Could you please have a look and guide me?

Statement:

insert overwrite table table2 PARTITION(dt='2016.06.13') select * from table1;

Error:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"c5qtsys_detail_clnt_no":"2430","c5qtsys_detail_acct_no":"4510109900005012 ","c5qtsys_detail_log_date":20160406,"c5qtsys_detail_auth_log_time":151837,"c5qtsys_detail_maint_date":20151020,"c5qtsys_detail_maint_time":131009,"c5qtsys_detail_operator_id":"BLANMA","c5qtsys_detail_prev_frd_flag":"N","c5qtsys_detail_curr_frd_flag":"N","c5qtsys_detail_trans_amt":"+0000003.61","c5qtsys_detail_message_type":100,"c5qtsys_detail_reference_no":" ","c5qtsys_detail_trans_date":2015289,"c5qtsys_detail_trans_id":"305289672360311 ","c5qtsys_detail_trans_apvl_cd":"010837","c5qtsys_detail_merchant_class":"8661","filler":" "}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:545)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
    ... 17 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.ParquetHiveRecord
    at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:753)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:535)
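The ClassCastException suggests the custom SerDe is handing the Parquet writer a `Text` object rather than the record type it expects. One workaround sometimes used in this situation (a sketch, not necessarily the accepted answer in this thread; `table1_staging` is a hypothetical name) is to stage the data through a table in a built-in format, so the custom SerDe is only involved on the read side:

```sql
-- Hypothetical staging workaround: copy through a built-in storage format
-- (ORC here) so the custom SerDe never feeds the Parquet writer directly.
CREATE TABLE table1_staging STORED AS ORC AS
SELECT * FROM table1;

INSERT OVERWRITE TABLE table2 PARTITION (dt='2016.06.13')
SELECT * FROM table1_staging;
```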
Labels: Apache Hive