Support Questions


Customized class INPUTFORMAT for hive external table extending AvroContainerInputFormat is not working with MapReduce task

New Contributor

I have an HDFS path containing files in two different formats:
1- Avro


I want to build a Hive external table on that path, but only over the Avro files. I do not have the option to keep each file format in a separate location, and I could not find a way to pick files by extension for the Hive external table.

The suggested solution was to build a custom class that filters the files based on the target type, extending the main Avro input class AvroContainerInputFormat.



import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;

public class CustomAvroContainerInputFormat extends AvroContainerInputFormat {

    @Override
    public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
        // Get the original splits from the parent Avro input format
        InputSplit[] originalSplits = super.getSplits(job, numSplits);

        // Collect only the splits whose file matches the target extension
        List<InputSplit> filteredSplits = new ArrayList<>();

        // The specific file extension to keep
        String fileExtension = ".avroSnappy";

        for (InputSplit split : originalSplits) {
            // Check if the split's file path ends with the specified extension
            String filePath = ((FileSplit) split).getPath().toString();

            if (filePath.endsWith(fileExtension)) {
                System.out.println("File Path considered: " + filePath);
                filteredSplits.add(split);
            }
        }

        // Convert the list back to an array
        return filteredSplits.toArray(new InputSplit[0]);
    }

    // Other methods can be overridden as needed
}




Above is the customized class
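For anyone who wants to sanity-check the filtering logic itself without a Hadoop cluster, here is a minimal, Hadoop-free sketch of the same extension filter (the paths below are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ExtensionFilterDemo {
    // Keep only paths ending with the given extension, mirroring the
    // endsWith() check in the custom input format above
    static List<String> keepAvro(List<String> paths, String ext) {
        return paths.stream()
                .filter(p -> p.endsWith(ext))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> paths = Arrays.asList(
                "hdfs://nn/path/a.avroSnappy",
                "hdfs://nn/path/b.json",
                "hdfs://nn/path/c.avroSnappy");
        System.out.println(keepAvro(paths, ".avroSnappy"));
    }
}
```

This confirms the string matching behaves as expected; the remaining question is whether Hive actually calls the override at query time.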

The class is added to Hive using the command below:

ADD JAR /path/CustomDSAvroInputFormat.jar;

and the Hive external table is created as below:


CREATE EXTERNAL TABLE <table_name> (
    event_day STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'CustomAvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '.....'
TBLPROPERTIES ('avro.schema.url'='/path/AVRO_SCHEMA_FILE');



Now when I run SELECT * on the table, I get the expected result and the data is shown normally.

But when I use SELECT COUNT(*) or a WHERE clause, Hive launches a MapReduce job that somehow does not use the customized class, so it scans the other binary files as well and fails.

select * result


> select  event_day  from <table_name> limit 2;
File Path considered: hdfs://<URL><path>/c5f085e1-7a81-47e2-981e-702982c0ac00.avroSnappy
Time taken: 2.041 seconds, Fetched: 2 row(s)


select count(*) result


select  count(*)  from <table_name> limit 2;
Query ID = user_20240214072828_73a1c989-a71f-4153-93e0-5e719b949014
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1707415269562_0043, Tracking URL = <url>/application_1707415269562_0043/
Kill Command = /opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hadoop/bin/hadoop job  -kill job_1707415269562_0043
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2024-02-14 07:28:34,044 Stage-1 map = 0%,  reduce = 0%
2024-02-14 07:28:52,709 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_1707415269562_0043 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1707415269562_0043_m_000001 (and more) from job job_1707415269562_0043

Task with the most failures(4):
Task ID:

Diagnostic Messages for this Task:
Error: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(
        at org.apache.hadoop.mapred.MapTask.runOldMapper(
        at org.apache.hadoop.mapred.YarnChild$
        at Method)
        at org.apache.hadoop.mapred.YarnChild.main(
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
        at java.lang.reflect.Constructor.newInstance(
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(
        ... 11 more
Caused by: Not a data file.
        at org.apache.avro.file.DataFileStream.initialize(
        at org.apache.avro.file.DataFileReader.<init>(
        ... 16 more

FAILED: Execution Error, return code 2 from
MapReduce Jobs Launched:
Stage-Stage-1: Map: 2  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec


So, are there any ideas why this happens and how I can fix it, or an alternative solution?

Thanks in advance

I tried creating a view on top of the main table and using nested queries, but neither worked either.


Super Collaborator


The issue you're encountering with the Hive external table, where it fails when using COUNT(*) or WHERE clauses, seems to be related to the custom input format not being properly utilized during query execution. This can lead to errors when Hive attempts to read the files using the default input format.
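One likely culprit, hinted at by the HadoopShimsSecure$CombineFileRecordReader frames in your stack trace: for MapReduce jobs Hive defaults to CombineHiveInputFormat, which computes combined splits itself and can bypass the getSplits() override of the table's custom input format (simple SELECT * runs as a fetch task, which is why it works). As an experiment, not a confirmed fix, you could force the per-table input format for the session:

```sql
-- Experiment only: disable split combining so the table's own
-- INPUTFORMAT (and its getSplits filter) is used by the MR job
SET hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
select count(*) from <table_name>;
```

If the count succeeds with this setting, the problem is the split-combining path rather than the custom class itself.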

Ensure Custom Input Format is Used: Verify that the custom input format (CustomAvroContainerInputFormat) is correctly configured and loaded in the Hive environment. Confirm that the JAR containing the custom input format class is added to the Hive session or cluster, and that there are no errors or warnings during the JAR loading process.
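To double-check this point, you can re-add and list the jar in the same session before running the query (the jar path is the one from your post):

```sql
-- Verify the jar is registered in the current Hive session
ADD JAR /path/CustomDSAvroInputFormat.jar;
LIST JARS;
```

Note that ADD JAR is per-session; for the map tasks of a MapReduce job to load the class, the jar may also need to be on hive.aux.jars.path (or in Hive's auxlib directory) depending on your deployment.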

Check Table Properties: Ensure that the custom input format class is correctly specified in the table properties (INPUTFORMAT), and that there are no typos or syntax errors in the table definition.
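For reference, a DDL skeleton of that shape (table and location names are placeholders; verify the class name matches exactly, including any package prefix):

```sql
CREATE EXTERNAL TABLE <table_name> (
    event_day STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'CustomAvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/path/to/data'
TBLPROPERTIES ('avro.schema.url'='/path/AVRO_SCHEMA_FILE');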

Test with Basic Queries: Start with basic queries (SELECT *) to ensure that the custom input format is utilized and data can be read from the Avro files (your SELECT * output suggests this part already works). If basic queries succeed but aggregate or filtered queries fail, it points to an incompatibility between the input format and the MapReduce execution path.

Consider Alternative Approaches: If troubleshooting the custom input format does not resolve the issue, consider alternative approaches for filtering the files based on their format. For example, you could pre-process the data to separate Avro and JSON files into different directories or partitions, or use other techniques such as external scripts or custom SerDes to handle different file formats within the same directory.
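If pre-separating the files is acceptable, a one-time move along these lines would let a plain Avro external table point at a clean directory (all paths below are hypothetical; hdfs dfs -mv accepts a glob in the source):

```shell
# One-time cleanup (hypothetical paths): isolate the .avroSnappy files
# so a standard Avro external table can target the new directory
hdfs dfs -mkdir -p /data/events/avro_only
hdfs dfs -mv '/data/events/*.avroSnappy' /data/events/avro_only/
```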



Chethan YM