New Contributor
Posts: 1
Registered: ‎09-01-2014

Null pointer exception in reducer

Hi there,

 

I am getting a NullPointerException on the reducer side, but my mapper runs fine.

 

Please find my code below.

 

Toppermain.java

package TopperPackage;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

 

 

public class TopperMain {
    // Usage: hadoop jar TopperMain.jar args[0] args[1]
    public static void main(String[] args) throws Exception {
        Job myhadoopJob = new Job();

        myhadoopJob.setJarByClass(TopperMain.class);
        myhadoopJob.setJobName("Finding topper based on subject");

        FileInputFormat.addInputPath(myhadoopJob, new Path(args[0]));
        FileOutputFormat.setOutputPath(myhadoopJob, new Path(args[1]));

        myhadoopJob.setInputFormatClass(TextInputFormat.class);
        myhadoopJob.setOutputFormatClass(TextOutputFormat.class);

        myhadoopJob.setMapperClass(TopperMapper.class);
        myhadoopJob.setReducerClass(TopperReduce.class);

        myhadoopJob.setMapOutputKeyClass(Text.class);
        myhadoopJob.setMapOutputValueClass(Text.class);
        myhadoopJob.setOutputKeyClass(Text.class);
        myhadoopJob.setOutputValueClass(Text.class);

        System.exit(myhadoopJob.waitForCompletion(true) ? 0 : 1);
    }
}
___________________________________________________________________________

TopperMapper.java

package TopperPackage;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/* Sample input:
   Surender,87,60,50,50,80
   Raj,80,70,80,85,60
   Anten,81,60,50,70,100
   Dinesh,60,90,80,80,70
   Priya,80,85,91,60,75
*/
public class TopperMapper extends Mapper<LongWritable, Text, Text, Text> {
    String temp, temp2;

    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String record = value.toString();
        String[] parts = record.split(",");
        temp = parts[0];

        temp2 = temp + "\t" + parts[1];
        context.write(new Text("Tamil"), new Text(temp2));

        temp2 = temp + "\t" + parts[2];
        context.write(new Text("English"), new Text(temp2));

        temp2 = temp + "\t" + parts[3];
        context.write(new Text("Maths"), new Text(temp2));

        temp2 = temp + "\t" + parts[4];
        context.write(new Text("Science"), new Text(temp2));

        temp2 = temp + "\t" + parts[5];
        context.write(new Text("SocialScrience"), new Text(temp2));
    }
}

___________________________________________________________________________

TopperReduce.java

package TopperPackage;

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

 

public class TopperReduce extends Reducer<Text, Text, Text, Text> {
    int temp;
    private String[] names;
    private int[] marks;

    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String top = "";
        int count = 0, topmark;
        marks = null;
        String befsplit;
        String[] parts = null;
        names = null;
        for (Text t : values) {
            befsplit = t.toString();
            parts = befsplit.split("\t");
            names[count] = parts[0];
            marks[count] = Integer.parseInt(parts[1]);
            count = count + 1;
        }
        topmark = calcTopper(marks);
        top = names[topmark] + "\t" + marks[topmark];
        context.write(new Text(key), new Text(top));
    }

    public int calcTopper(int[] marks) {
        int count = marks.length;
        temp = marks[1];
        int i = 0;
        for (i = 1; i <= (count - 2); i++) {
            if (temp < marks[i + 1]) {
                temp = marks[i + 1];
            }
        }
        return i;
    }
}

 

The output of the mapper is:

Tamil Surender 87
English Surender 60
Maths Surender 50
Science Surender 50
SocialScrience Surender 80
Tamil Raj 80
English Raj 70
Maths Raj 80
Science Raj 85
SocialScrience Raj 60
Tamil Anten 81
English Anten 60
Maths Anten 50
Science Anten 70
SocialScrience Anten 100
Tamil Dinesh 60
English Dinesh 90
Maths Dinesh 80
Science Dinesh 80
SocialScrience Dinesh 70
Tamil Priya 80
English Priya 85
Maths Priya 91
Science Priya 60
SocialScrience Priya 75

 

The error is:
cloudera@cloudera-vm:~/Jarfiles$ hadoop jar TopperMain.jar /user/cloudera/inputfiles/topper/topperinput.txt /user/cloudera/outputfiles/topper/
14/08/24 23:17:07 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/08/24 23:17:08 INFO input.FileInputFormat: Total input paths to process : 1
14/08/24 23:17:09 INFO mapred.JobClient: Running job: job_201408241907_0012
14/08/24 23:17:10 INFO mapred.JobClient: map 0% reduce 0%
14/08/24 23:17:49 INFO mapred.JobClient: map 100% reduce 0%
14/08/24 23:18:03 INFO mapred.JobClient: Task Id : attempt_201408241907_0012_r_000000_0, Status : FAILED
java.lang.NullPointerException
at TopperPackage.TopperReduce.reduce(TopperReduce.java:25)
at TopperPackage.TopperReduce.reduce(TopperReduce.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:571)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:262)

attempt_201408241907_0012_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201408241907_0012_r_000000_0: log4j:WARN Please initialize the log4j system properly.
14/08/24 23:18:22 INFO mapred.JobClient: Task Id : attempt_201408241907_0012_r_000000_1, Status : FAILED
java.lang.NullPointerException
at TopperPackage.TopperReduce.reduce(TopperReduce.java:25)
at TopperPackage.TopperReduce.reduce(TopperReduce.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:571)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:262)

attempt_201408241907_0012_r_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201408241907_0012_r_000000_1: log4j:WARN Please initialize the log4j system properly.
14/08/24 23:18:40 INFO mapred.JobClient: Task Id : attempt_201408241907_0012_r_000000_2, Status : FAILED
java.lang.NullPointerException
at TopperPackage.TopperReduce.reduce(TopperReduce.java:25)
at TopperPackage.TopperReduce.reduce(TopperReduce.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:571)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:262)

attempt_201408241907_0012_r_000000_2: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201408241907_0012_r_000000_2: log4j:WARN Please initialize the log4j system properly.
14/08/24 23:19:00 INFO mapred.JobClient: Job complete: job_201408241907_0012
14/08/24 23:19:00 INFO mapred.JobClient: Counters: 17
14/08/24 23:19:00 INFO mapred.JobClient: Job Counters
14/08/24 23:19:00 INFO mapred.JobClient: Launched reduce tasks=4
14/08/24 23:19:00 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=40782
14/08/24 23:19:00 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/24 23:19:00 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/08/24 23:19:00 INFO mapred.JobClient: Launched map tasks=1
14/08/24 23:19:00 INFO mapred.JobClient: Data-local map tasks=1
14/08/24 23:19:00 INFO mapred.JobClient: Failed reduce tasks=1
14/08/24 23:19:00 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=66564
14/08/24 23:19:00 INFO mapred.JobClient: FileSystemCounters
14/08/24 23:19:00 INFO mapred.JobClient: HDFS_BYTES_READ=240
14/08/24 23:19:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=54247
14/08/24 23:19:00 INFO mapred.JobClient: Map-Reduce Framework
14/08/24 23:19:00 INFO mapred.JobClient: Combine output records=0
14/08/24 23:19:00 INFO mapred.JobClient: Map input records=5
14/08/24 23:19:00 INFO mapred.JobClient: Spilled Records=25
14/08/24 23:19:00 INFO mapred.JobClient: Map output bytes=451
14/08/24 23:19:00 INFO mapred.JobClient: Combine input records=0
14/08/24 23:19:00 INFO mapred.JobClient: Map output records=25
14/08/24 23:19:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=129

 

 

Please advise me where I am wrong and help me debug the above program on CDH3 (a step-by-step guide would be appreciated).

Posts: 1,886
Kudos: 425
Solutions: 300
Registered: ‎07-31-2013

Re: Null pointer exception in reducer

What precisely is your reducer's line 25? It's difficult to tell from inline pasted sources (it comes up as a "}" character, which would be irrelevant to the NPE).

Can you instead paste the sources in proper form onto a service such as pastebin.com, and share the generated link?
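That said, one likely culprit is visible in the pasted code: `names` and `marks` are assigned `null` and never allocated before `names[count] = parts[0];` runs, which would throw a NullPointerException on the first value. Separately, `calcTopper` tracks the maximum mark in `temp` but returns the loop counter `i`, so even with allocated arrays it would not return the topper's index. Below is a minimal standalone sketch of the per-key logic with both issues avoided by tracking the running maximum directly (plain Java, no Hadoop dependency; the class name, helper, and sample values are illustrative, not your exact code):

```java
import java.util.Arrays;
import java.util.List;

public class TopperSketch {
    // Given "name\tmark" entries for one subject, return the entry
    // with the highest mark. No arrays to size up front: we keep a
    // running maximum instead, so there is nothing left null.
    static String topper(List<String> values) {
        String topName = null;
        int topMark = Integer.MIN_VALUE;
        for (String v : values) {
            String[] parts = v.split("\t");
            int mark = Integer.parseInt(parts[1]);
            if (mark > topMark) {
                topMark = mark;
                topName = parts[0];
            }
        }
        return topName + "\t" + topMark;
    }

    public static void main(String[] args) {
        // Sample values for the "Tamil" key, taken from the mapper output above.
        List<String> tamil = Arrays.asList(
                "Surender\t87", "Raj\t80", "Anten\t81", "Dinesh\t60", "Priya\t80");
        System.out.println(topper(tamil)); // prints: Surender	87
    }
}
```

In the real reducer the same loop body would iterate over `Iterable<Text>` and `context.write(key, new Text(top))` at the end; the key point is to avoid writing into arrays that were never allocated.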