
Hive table with parquet data showing 0 records

Expert Contributor

hello - I have a Parquet file, and I've created an EXTERNAL Hive table on top of it.

When I try to query the table, it returns 0 rows. Any ideas what the issue might be?

hdfs dfs -ls hdfs://abc/apps/hive/warehouse/amp.db/power/year=2017/month=12/day=01

-rw-r--r-- 2 pstl hdfs 141913174 2017-12-01 22:33 hdfs://abc/apps/hive/warehouse/amp.db/power/year=2017/month=12/day=01/part-00023-e749dbd1-63a9-499d-932e-a6eadf03a67c.c000.snappy.parquet

Table created:

CREATE EXTERNAL TABLE power_k1 (
  topic_k varchar(255),
  partition_k int,
  offset_k bigint,
  timestamp_k timestamp,
  deviceid bigint,
  devicename varchar(50),
  deviceip varchar(128),
  peerid int,
  objectid int,
  objectname varchar(256),
  objectdesc varchar(256),
  oid varchar(50),
  pduoutlet varchar(50),
  pluginid int,
  pluginname varchar(255),
  indicatorid int,
  indicatorname varchar(255),
  format int,
  snmppollvalue varchar(128) COMMENT 'value in sevone kafka avsc',
  time double,
  clustername varchar(50) COMMENT 'rpp or power',
  peerip varchar(50))
COMMENT 'external table at /apps/hive/warehouse/amp.db/sevone_power'
PARTITIONED BY (year int, month int, day int)
STORED AS PARQUET
LOCATION '/apps/hive/warehouse/amp.db/power'

select count(1) from power_k1 -> returns 0 records

Any ideas what the issue might be, and how best to debug this?

1 ACCEPTED SOLUTION

Super Collaborator

Hi @Karan Alang,

For an external partitioned table, we need to update the partition metadata, since Hive will not be aware of these partitions unless they are explicitly added.

That can be done with either of the following:

ALTER TABLE power_k1 RECOVER PARTITIONS;
-- or
MSCK REPAIR TABLE power_k1;
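
After the repair, the registration can be verified with something like the following (a sketch - the partition listing and count will of course depend on your actual data):

SHOW PARTITIONS power_k1;
-- should now list entries such as year=2017/month=12/day=01
SELECT COUNT(1) FROM power_k1;
-- should return the real row count instead of 0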

More on this can be found in the Hive DDL Language Manual.

Hope this helps!


6 REPLIES


Expert Contributor

Thanks @bkosaraju - that worked!

Expert Contributor
@bkosaraju

I'm getting the following error when querying the table - any ideas?

0: jdbc:hive2://msc02-jag-hve-002.uat.gdcs.ap> select deviceid, devicename, indicatorname, topic_k, partition_k, offset_k from powerpoll where year=2017 and month=12 and day=11 limit 5;

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 86.0 failed 4 times, most recent failure: Lost task 0.3 in stage 86.0 (TID 19049, msc02-jag-dn-011.uat.gdcs.apple.com): java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainIntegerDictionary
    at org.apache.parquet.column.Dictionary.decodeToLong(Dictionary.java:52)
    at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getLong(OnHeapColumnVector.java:274)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Super Collaborator

Hi @Karan Alang, it looks like the column ("format") is a reserved word and may be causing the problem; please exclude it from the selection and try again.
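
As a side note, if a reserved word ever does need to be referenced, Hive allows escaping column names with backticks, for example (assuming powerpoll shares the power_k1 schema):

SELECT `format`, deviceid FROM powerpoll WHERE year=2017 AND month=12 AND day=11 LIMIT 5;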

Expert Contributor

@bkosaraju - this is the query that was fired:

select deviceid, devicename, indicatorname, topic_k, partition_k, offset_k from powerpoll where year=2017 and month=12 and day=11 limit 5;

There is no column called format in the query - can you please clarify what you meant?

Expert Contributor

@bkosaraju - I re-checked this, and the issue seems to occur when I include the column deviceid (bigint) in the query.
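
That would fit the stack trace above: decodeToLong being called on a PlainIntegerDictionary is consistent with a type mismatch, i.e. the table declares deviceid as bigint while the Parquet file itself stores it as a 32-bit integer. A minimal sketch of a fix, assuming the file really does hold INT32 values (the file's schema can be inspected with parquet-tools) and that powerpoll was created with the same DDL as power_k1:

-- hypothetical fix: align the Hive column type with the Parquet file's INT32
ALTER TABLE powerpoll CHANGE deviceid deviceid int;

For an external table this only changes the Hive metadata; the underlying Parquet data is untouched.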