Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Why do I get NoSuchFieldError: DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH for hadoop-streaming?


I'm trying to run a hadoop-streaming job, which runs a Wine application via a bash shell script. When I run the job, the system returns java.lang.NoSuchFieldError: DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH. I'm running CDH 5.3. Is this something that would show up if you are running hadoop-streaming MR1 vs. MR2?

 

The command:

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
                -D mapreduce.input.fileinputformat.split.minsize=1000737418240 \
                -D mapreduce.job.reduces=0  \
                -input hdfs://xdata/data/nxcore/*.XA.nxc  \
                -output hdfs://xdata/data/nxcore/processed \
                -mapper nxprocess.sh \
                -file /home/nxcore/.wine/drive_c/Projects/nxcore/nxprocess.sh \
                -verbose
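(Not the cause of the error below, but worth ruling out first: a streaming mapper can be smoke-tested locally by piping sample input through it, the same way the framework does. The stand-in script and records here are made-up examples, not the real nxprocess.sh.)

```shell
# Minimal stand-in mapper mimicking nxprocess.sh's role: read records on
# stdin, emit records on stdout, which is all hadoop-streaming asks of -mapper.
tmp=$(mktemp -d)
cat > "$tmp/mapper.sh" <<'EOF'
#!/bin/sh
while read -r line; do
  printf '%s\tprocessed\n' "$line"
done
EOF
chmod +x "$tmp/mapper.sh"

# Simulate the framework: pipe input records through the mapper.
out=$(printf 'rec1\nrec2\n' | "$tmp/mapper.sh")
echo "$out"
rm -rf "$tmp"
```

If the script behaves under a plain pipe, the streaming half of the job is fine and the problem is elsewhere.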

I get the following error:

15/02/06 16:20:18 INFO mapreduce.JobSubmitter: number of splits:5
15/02/06 16:20:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1419361536499_0082
15/02/06 16:20:18 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/nxcore/.staging/job_1419361536499_0082
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH
        at org.apache.hadoop.mapreduce.v2.util.MRApps.setMRFrameworkClasspath(MRApps.java:218)
        at org.apache.hadoop.mapreduce.v2.util.MRApps.setClasspath(MRApps.java:250)
        at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:460)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:407)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:606)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:601)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:601)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:586)
        at org.apache.hadoop.streaming.StreamJob.submitAndMonitorJob(StreamJob.java:1014)
        at org.apache.hadoop.streaming.StreamJob.run(StreamJob.java:135)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:50)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
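(A NoSuchFieldError at submission time usually means MRApps was compiled against a newer version of the class that holds the field, likely MRJobConfig given the MRApps.setMRFrameworkClasspath frame, than the copy actually loaded, i.e. two versions of the same class are on the classpath. Because jar entry names are stored as plain text inside the zip container, a grep over the jar files can list every copy. The scratch directory and version numbers below are made-up stand-ins for a real directory such as /usr/lib/hadoop-mapreduce.)

```shell
# Scratch dir simulating a jar directory with a stale copy left behind;
# file names and versions are hypothetical examples.
tmp=$(mktemp -d)
printf 'org/apache/hadoop/mapreduce/MRJobConfig.class' \
  > "$tmp/hadoop-mapreduce-client-core-2.3.0.jar"
printf 'org/apache/hadoop/mapreduce/MRJobConfig.class' \
  > "$tmp/hadoop-mapreduce-client-core-2.5.0-cdh5.3.0.jar"

# grep -l lists every jar whose uncompressed entry names mention the class.
found=$(grep -l 'mapreduce/MRJobConfig' "$tmp"/*.jar)
echo "$found"
rm -rf "$tmp"
```

On a real node you would run the same grep over each directory on `hadoop classpath`; more than one hit for the same class is the smoking gun.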

TIA,

Matt

1 ACCEPTED SOLUTION


Scripts were including the wrong version of a jar file, which was left over from an upgrade.

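(When the culprit is a jar left over from an upgrade, comparing artifact names with their version suffixes stripped will flag any library present in two versions. A sketch of that check; the scratch directory and jar names are hypothetical, and on a cluster you would feed `hadoop classpath | tr ':' '\n'` into the same pipeline.)

```shell
# Scratch dir standing in for the real set of classpath jars.
tmp=$(mktemp -d)
touch "$tmp/hadoop-mapreduce-client-core-2.3.0.jar" \
      "$tmp/hadoop-mapreduce-client-core-2.5.0-cdh5.3.0.jar" \
      "$tmp/hadoop-streaming-2.5.0-cdh5.3.0.jar"

# Strip the "-<version>.jar" suffix; any artifact name that survives
# `uniq -d` exists in more than one version and is a leftover candidate.
dups=$(ls "$tmp" | sed -E 's/-[0-9][^/]*\.jar$//' | sort | uniq -d)
echo "$dups"
rm -rf "$tmp"
```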
