Support Questions

Getting empty output file after running hadoop code

New Contributor

This is the output I am getting after running my code; I have attached my code below.

Kindly advise, as I am new to Hadoop and have been working on this error for the last couple of days.

17/05/16 14:17:01 INFO mapred.LocalJobRunner: reduce > reduce
17/05/16 14:17:01 INFO mapred.Task: Task 'attempt_local1523217778_0001_r_000000_0' done.
17/05/16 14:17:01 INFO mapred.LocalJobRunner: Finishing task: attempt_local1523217778_0001_r_000000_0
17/05/16 14:17:01 INFO mapred.LocalJobRunner: reduce task executor complete.
17/05/16 14:17:01 INFO mapreduce.Job: Job job_local1523217778_0001 running in uber mode : false
17/05/16 14:17:01 INFO mapreduce.Job: map 0% reduce 100%
17/05/16 14:17:01 INFO mapreduce.Job: Job job_local1523217778_0001 completed successfully
17/05/16 14:17:01 INFO mapreduce.Job: Counters: 29
    File System Counters
        FILE: Number of bytes read=53030675
        FILE: Number of bytes written=53719810
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=0
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Map-Reduce Framework
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=0
        Reduce input records=0
        Reduce output records=0
        Spilled Records=0
        Shuffled Maps =0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=123797504
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Output Format Counters
        Bytes Written=0

My code

Config class

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class IPLTConfig {
    public static void main(String... args) throws Throwable {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "IPLT");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        job.setCombinerClass(IPLTReducer.class);
        job.setReducerClass(IPLTReducer.class);
        job.setMapperClass(IPLTMapper.class);
        job.setJarByClass(IPLTConfig.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
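One side note on the driver, probably unrelated to the empty output: Job copies its Configuration when it is constructed, so any generic options (-D, -files, -libjars) that GenericOptionsParser applies to conf afterwards never reach the job. Plain path arguments still work either way, but the stock WordCount driver parses first. A minimal sketch of that ordering (IPLTConfigReordered is just an illustrative name, not from the original post):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class IPLTConfigReordered {
    public static void main(String... args) throws Throwable {
        Configuration conf = new Configuration();
        // Parse generic options (-D, -files, -libjars) BEFORE constructing the Job,
        // because the Job takes a copy of conf and ignores later changes to it.
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        Job job = new Job(conf, "IPLT"); // Job.getInstance(conf, "IPLT") is the non-deprecated form on Hadoop 2
        job.setJarByClass(IPLTConfigReordered.class);
        job.setMapperClass(IPLTMapper.class);
        job.setCombinerClass(IPLTReducer.class);
        job.setReducerClass(IPLTReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}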

Mapper class

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class IPLTMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer iter = new StringTokenizer(value.toString());
        while (iter.hasMoreTokens()) {
            word.set(iter.nextToken());
            context.write(word, one);
        }
    }
}
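Note that StringTokenizer splits on whitespace only, so punctuation stays attached to tokens: a line like "Hello, world" yields the tokens "Hello," and "world", which would be counted as distinct words. That is the standard behavior of the basic WordCount pattern, just worth keeping in mind when reading the output.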

Reducer class

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IPLTReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> intOne, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        Iterator<IntWritable> iter = intOne.iterator();
        while (iter.hasNext()) {
            sum += iter.next().get();
        }
        result.set(sum);
        context.write(word, result);
    }
}
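To illustrate the pipeline end to end: for an input line "to be or not to be", the mapper emits (to,1), (be,1), (or,1), (not,1), (to,1), (be,1); the shuffle groups the values by key, and the reducer sums them to produce be 2, not 1, or 1, to 2. Because the reduce step is just an associative, commutative sum with matching input and output types, the driver can safely reuse this class as the combiner, which is what setCombinerClass(IPLTReducer.class) does above.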

1 REPLY

Expert Contributor

Your code is fine; I copy-pasted it into a project and it works.

Could you please provide the following:

  • Hadoop version you're using
  • How you called the MapReduce job (for comparison, a typical invocation is shown below)
  • Complete logs of the job
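
On the second point, a typical invocation of a driver like this would be (the jar name and HDFS paths here are placeholders; substitute your own):

hadoop jar iplt.jar IPLTConfig /user/<your-user>/input /user/<your-user>/output

It is also worth verifying that the input path exists and contains non-empty files, for example with hdfs dfs -ls /user/<your-user>/input. An empty or wrong input path would be consistent with the zeroed map and reduce record counters in your log.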