I had imported 100 GB+ of Parquet data into a table in Hue whose schema I had defined manually. With 14 of the 46 columns defined, the import succeeded. My next step was to define all 46 columns. When I then imported the data again, I got this error:
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot inspect org.apache.hadoop.io.ArrayWritable
The Parquet files no longer appear in HDFS. Is there a way to find out what happened to the data? What went wrong, and can it be corrected?
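
For context, the table was set up along these lines through Hue's Hive editor. This is only a sketch: the table name, column names, types, and path below are placeholders rather than my real 46-column schema, and the exact statements Hue generated may have differed.

CREATE TABLE my_parquet_table (
  -- placeholder columns; the real table initially had 14 of its 46 columns defined
  col_01 STRING,
  col_02 BIGINT,
  col_03 DOUBLE
)
STORED AS PARQUET;

-- my guess at the import step Hue ran against the existing files (path is hypothetical)
LOAD DATA INPATH '/user/me/parquet_data' INTO TABLE my_parquet_table;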