Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

HBASE Caused by: java.io.IOException: Wrong number of partitions in keyset

Rising Star

Hi, when I create a table in HBase with pre-splits, I get the error below. I am running this on my VM.

This is how I create the table:

  public static void main(String[] args) throws Exception {

    if (hbaseConf == null)
        hbaseConf = getHbaseConfiguration();
    String outputPath = args[2];
    hbaseConf.set("data.seperator", DATA_SEPERATOR);
    hbaseConf.set("hbase.table.name", args[0]);
    hbaseConf.setInt(MAX_FILES_PER_REGION_PER_FAMILY, 1024);

    Job job = new Job(hbaseConf);
    job.setJarByClass(HBaseBulkLoadDriver.class);
    job.setJobName("Bulk Loading HBase Table::" + args[0]);
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapperClass(HBaseBulkLoadMapperUnzipped.class);
    job.getConfiguration().set("mapreduce.job.acl-view-job", " bigdata-project-fricadev");

    //if (HbaseBulkLoadMapperConstants.FUNDAMENTAL_ANALYTIC.equals(args[0])) {
    // Pre-split the table: createTable() derives REGIONS_COUNT - 1 split keys
    // evenly spaced between startKey and endKey.
    HTableDescriptor descriptor = new HTableDescriptor(Bytes.toBytes(args[0]));
    descriptor.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
    HBaseAdmin admin = new HBaseAdmin(hbaseConf);
    byte[] startKey = new byte[16];
    Arrays.fill(startKey, (byte) 0);
    byte[] endKey = new byte[16];
    Arrays.fill(endKey, (byte) 255);
    admin.createTable(descriptor, startKey, endKey, REGIONS_COUNT);
    admin.close();

    FileInputFormat.setInputPaths(job, args[1]);
    FileOutputFormat.setOutputPath(job, new Path(outputPath));

    job.setMapOutputValueClass(Put.class);
    // Sets TotalOrderPartitioner, writes the partitions file, and sets the
    // number of reducers to the table's region count.
    HFileOutputFormat.configureIncrementalLoad(job, new HTable(hbaseConf, args[0]));

    // Exit only on failure: the original unconditional System.exit() here
    // terminated the JVM before the println and doBulkLoad below could run.
    if (!job.waitForCompletion(true)) {
        System.exit(-1);
    }

    System.out.println("job is successful..........");

    HBaseBulkLoad.doBulkLoad(outputPath, args[0]);
  }
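For context on the pre-split, `admin.createTable(descriptor, startKey, endKey, REGIONS_COUNT)` derives `REGIONS_COUNT - 1` boundary keys between the start and end keys, so the table opens with `REGIONS_COUNT` regions. Below is a minimal plain-Java sketch of that arithmetic; it is a simplified stand-in for HBase's `Bytes.split`, not the actual implementation:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitKeySketch {
    // Simplified stand-in for HBase's Bytes.split: produce numRegions - 1
    // evenly spaced boundary keys strictly between startKey and endKey.
    static List<byte[]> splitKeys(byte[] startKey, byte[] endKey, int numRegions) {
        BigInteger lo = new BigInteger(1, startKey);   // treat keys as unsigned integers
        BigInteger hi = new BigInteger(1, endKey);
        BigInteger range = hi.subtract(lo);
        List<byte[]> splits = new ArrayList<>();
        for (int i = 1; i < numRegions; i++) {
            BigInteger boundary = lo.add(range.multiply(BigInteger.valueOf(i))
                                              .divide(BigInteger.valueOf(numRegions)));
            byte[] raw = boundary.toByteArray();
            byte[] key = new byte[startKey.length];    // left-pad/truncate to key width
            int copy = Math.min(raw.length, key.length);
            System.arraycopy(raw, raw.length - copy, key, key.length - copy, copy);
            splits.add(key);
        }
        return splits;
    }

    public static void main(String[] args) {
        byte[] start = new byte[16];                   // all 0x00, as in the question
        byte[] end = new byte[16];
        Arrays.fill(end, (byte) 0xFF);                 // all 0xFF, as in the question
        List<byte[]> splits = splitKeys(start, end, 90);
        // 90 regions need exactly 89 boundary keys
        System.out.println(splits.size());
    }
}
```

With `REGIONS_COUNT = 90`, the table therefore has 89 internal region boundaries, and those are exactly the split points that `configureIncrementalLoad` later writes into the partitions file.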

This is the exception:

  java.lang.Exception: java.lang.IllegalArgumentException: Can't read partitions file
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:406)
  Caused by: java.lang.IllegalArgumentException: Can't read partitions file
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:108)
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:587)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:656)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:268)
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)
  Caused by: java.io.IOException: Wrong number of partitions in keyset
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:82)
     ... 11 more
  16/11/28 22:13:41 INFO mapred.JobClient:  map 0% reduce 0%
  16/11/28 22:13:41 INFO mapred.JobClient: Job complete: job_local1390185166_0001
  16/11/28 22:13:41 INFO mapred.JobClient: Counters: 0
  Job failed...
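For reference, the `IOException` comes from a sanity check in `TotalOrderPartitioner.setConf()`: the number of split points read from the partitions file must equal the job's reducer count minus one. The stack trace shows the job running under `LocalJobRunner`, and one plausible way to hit this on a single VM is that older local runners force a single reduce task while the partitions file written for the pre-split table still holds `REGIONS_COUNT - 1` split points. A plain-Java paraphrase of the check (not the actual Hadoop source):

```java
public class PartitionCheckSketch {
    // Paraphrase of the guard in TotalOrderPartitioner.setConf(): a total
    // order over R reducers needs exactly R - 1 split points in the file.
    static void checkKeyset(int splitPointCount, int numReduceTasks) throws java.io.IOException {
        if (splitPointCount != numReduceTasks - 1) {
            throw new java.io.IOException("Wrong number of partitions in keyset");
        }
    }

    public static void main(String[] args) {
        try {
            checkKeyset(89, 90);                 // 90 reducers, 89 split points: consistent
            System.out.println("89 splits / 90 reducers: ok");
            checkKeyset(89, 1);                  // e.g. a local runner forcing 1 reducer
        } catch (java.io.IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

If this is the cause, comparing `job.getNumReduceTasks()` at runtime against the table's region count should make the mismatch visible immediately.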
1 ACCEPTED SOLUTION

New Member

We got this exception after migrating a table from another cluster. The root cause was a duplicate region created with the same start key and end key.
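On a real cluster the region boundaries would come from the Admin API or `hbase:meta`; the sketch below only illustrates the duplicate check itself, using hex strings as stand-ins for region keys:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateRegionSketch {
    // Flag any two regions sharing the same start key and end key. The
    // (start, end) pairs here are placeholders for real region boundaries.
    static List<String> findDuplicates(List<String[]> regions) {
        Set<String> seen = new HashSet<>();
        List<String> dups = new ArrayList<>();
        for (String[] r : regions) {
            String boundary = r[0] + "|" + r[1];   // start + end as one lookup key
            if (!seen.add(boundary)) {
                dups.add("[" + r[0] + ", " + r[1] + ")");
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        List<String[]> regions = Arrays.asList(
            new String[]{"", "10"},
            new String[]{"10", "20"},
            new String[]{"10", "20"},              // duplicate left by a bad migration
            new String[]{"20", ""});
        System.out.println(findDuplicates(regions));
    }
}
```

A duplicate boundary pair means the partitions file ends up with one split point fewer than the reducer count expects, which matches the "Wrong number of partitions in keyset" failure.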


8 REPLIES


Are you specifying a number of reducers for the job that is not equal to the number of regions in your table?

Rising Star

No, I have not specified that anywhere.


I don't see much of a problem with the code. Can you try adding a debug point in TotalOrderPartitioner.setConf() (around line 88) and see why the split points read from the partitions file differ from the expected count?

Rising Star

I have 95 live nodes in my cluster, and I am passing REGIONS_COUNT as 90.


New Member

@Ranjith Uthaman What did you do to solve the issue? I exported the HBase table data from one cluster, and I am getting this error while creating HFiles from the exported data. Please help.

New Member

Hi Vijay, did you solve this issue? I am having the same exception. Kindly share.