
Hive/Beeline : creation of table with subarrays


I'm trying to create a table in Hive using Beeline. The data are stored in HDFS as Parquet files.

Here is the error I get when I run SELECT * FROM datalake.test:

Error: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://tmp/test/part-r-00000-7e58b193-a08f-44b1-87fa-bb12b4053bdf.gz.parquet (state=,code=0)

The data have the following schema:

{
  "object_type":"test",
  "heartbeat":1496755564224,
  "events":[
    {
      "timestamp":1496755582985,
      "hostname":"hostname1",
      "instance":"instance1",
      "metrics_array":[
        {
          "metric_name":"metric1_1",
          "metric_value":"value1_1"
        }
      ]
    },
    {
      "timestamp":1496756626551,
      "hostname":"hostname2",
      "instance":"instance1",
      "metrics_array":[
        {
          "metric_name":"metric2_1",
          "metric_value":"value2_1"
        }
      ]
    }
  ]
}

The HQL script I use to create the table is the following:

set hive.support.sql11.reserved.keywords=false;

CREATE DATABASE IF NOT EXISTS datalake;

DROP TABLE IF EXISTS datalake.test;

CREATE EXTERNAL TABLE IF NOT EXISTS datalake.test
  (
     object_type STRING,
     heartbeat BIGINT,
     events STRUCT <
       metrics_array: STRUCT <
       metric_name: STRING,
       metric_value: STRING
       >,
       timestamp: BIGINT,
       hostname: STRING,
       instance: STRING
     >
)
STORED AS PARQUET
LOCATION '/tmp/test/';


@Adrien Mafety

The issue might be related to PARQUET-377, which can occur when the Parquet file was written with one version of Parquet while Hive uses a different version of the Parquet libraries.
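Independently of the version mismatch, it may also be worth double-checking the column types against the data: in the JSON schema above, both events and metrics_array hold JSON arrays of objects, while the DDL declares them as single STRUCTs. A type mismatch like that can also surface as a ParquetDecodingException on read. A sketch of the CREATE TABLE with ARRAY wrappers added (untested against your files; field order follows the JSON, and the reserved word timestamp still relies on the hive.support.sql11.reserved.keywords=false setting from your script):

```sql
-- Sketch only: events and metrics_array declared as arrays of structs,
-- matching the nesting shown in the JSON sample.
CREATE EXTERNAL TABLE IF NOT EXISTS datalake.test
  (
     object_type STRING,
     heartbeat   BIGINT,
     events ARRAY<STRUCT<
       timestamp: BIGINT,
       hostname: STRING,
       instance: STRING,
       metrics_array: ARRAY<STRUCT<
         metric_name: STRING,
         metric_value: STRING
       >>
     >>
)
STORED AS PARQUET
LOCATION '/tmp/test/';
```

If the types line up with what the Parquet writer produced, a SELECT * should then return the nested rows instead of failing in block -1.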