
Errors during Smoke Test - org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)


New Contributor

Hi, I am following this section of the HDP 2.4 manual install guide to run a MapReduce job (using TeraSort to sort 10 GB of data), but it keeps failing with the errors below: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/smoke_te...

I am running this in VirtualBox with 6 GB RAM and 2 CPU cores.

[client@rac1 ~]$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-*.jar teragen 10000 tmp/teragenout

WARNING: Use "yarn jar" to launch YARN applications.
16/05/13 09:36:54 INFO client.RMProxy: Connecting to ResourceManager at rac1.mlg.oracle.com/192.168.56.21:8050
16/05/13 09:36:55 INFO terasort.TeraSort: Generating 10000 using 2
16/05/13 09:36:55 INFO mapreduce.JobSubmitter: number of splits:2
16/05/13 09:36:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1463124608398_0001
16/05/13 09:36:56 INFO impl.YarnClientImpl: Submitted application application_1463124608398_0001
16/05/13 09:36:56 INFO mapreduce.Job: The url to track the job: http://rac1.mlg.oracle.com:8088/proxy/application_1463124608398_0001/
16/05/13 09:36:56 INFO mapreduce.Job: Running job: job_1463124608398_0001
16/05/13 09:37:02 INFO mapreduce.Job: Job job_1463124608398_0001 running in uber mode : false
16/05/13 09:37:02 INFO mapreduce.Job: map 0% reduce 0%
16/05/13 09:37:02 INFO mapreduce.Job: Job job_1463124608398_0001 failed with state FAILED due to: Application application_1463124608398_0001 failed 2 times due to AM Container for appattempt_1463124608398_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463124608398_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463124608398_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/05/13 09:37:02 INFO mapreduce.Job: Counters: 0
[client@rac1 ~]$

Errors in the ResourceManager log:

[root@rac1 ~]# tail -f /var/log/hadoop/yarn/yarn-yarn-resourcemanager-rac1.mlg.oracle.com.log

2016-05-13 09:37:01,815 INFO rmapp.RMAppImpl (RMAppImpl.java:transition(1027)) - Application application_1463124608398_0001 failed 2 times due to AM Container for appattempt_1463124608398_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463124608398_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463124608398_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-05-13 09:37:01,816 INFO capacity.CapacityScheduler (CapacityScheduler.java:completedContainer(1562)) - Application attempt appattempt_1463124608398_0001_000002 released container container_1463124608398_0001_02_000001 on node: host: rac1.mlg.oracle.com:45454 #containers=0 available=<memory:10240, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2016-05-13 09:37:01,816 INFO capacity.CapacityScheduler (CapacityScheduler.java:doneApplicationAttempt(896)) - Application Attempt appattempt_1463124608398_0001_000002 is done. finalState=FAILED
2016-05-13 09:37:01,816 INFO scheduler.AppSchedulingInfo (AppSchedulingInfo.java:clearRequests(117)) - Application application_1463124608398_0001 requests cleared
2016-05-13 09:37:01,817 INFO capacity.LeafQueue (LeafQueue.java:removeApplicationAttempt(726)) - Application removed - appId: application_1463124608398_0001 user: client queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-05-13 09:37:01,822 INFO rmapp.RMAppImpl (RMAppImpl.java:handle(767)) - application_1463124608398_0001 State change from FINAL_SAVING to FAILED
2016-05-13 09:37:01,823 INFO capacity.ParentQueue (ParentQueue.java:removeApplication(372)) - Application removed - appId: application_1463124608398_0001 user: client leaf-queue of parent: root #applications: 0
2016-05-13 09:37:01,826 WARN resourcemanager.RMAuditLogger (RMAuditLogger.java:logFailure(323)) - USER=client OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1463124608398_0001 failed 2 times due to AM Container for appattempt_1463124608398_0001_000002 exited with exitCode: 1 For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463124608398_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463124608398_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Errors in the NodeManager log (/var/log/hadoop/yarn/yarn-yarn-nodemanager-rac1.mlg.oracle.com.log):

2016-05-13 09:36:59,128 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(224)) - Exit code from container container_1463124608398_0001_01_000001 is : 1
2016-05-13 09:36:59,129 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(230)) - Exception from container-launch with container ID: container_1463124608398_0001_01_000001 and exit code: 1
ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

---------------------- http://rac1.mlg.oracle.com:8088/cluster/app/application_1463124608398_0001 ----------------------

User: client
Name: TeraGen
Application Type: MAPREDUCE
Application Tags:
YarnApplicationState: FAILED
Queue: default
FinalStatus Reported by AM: FAILED
Started: Fri May 13 09:36:56 +0200 2016
Elapsed: 5sec
Tracking URL: History
Log Aggregation Status: SUCCEEDED
Diagnostics: Application application_1463124608398_0001 failed 2 times due to AM Container for appattempt_1463124608398_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463124608398_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463124608398_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

Settings in $HADOOP_CONF_DIR/yarn-site.xml:

[root@rac1 conf]# cat $HADOOP_CONF_DIR/yarn-site.xml
<!--Thu Aug 15 20:46:53 2013-->
<configuration>
  <property><name>yarn.nodemanager.address</name><value>0.0.0.0:45454</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/grid/hadoop/yarn/local,/grid1/hadoop/yarn/local,/grid2/hadoop/yarn/local</value></property>
  <property><name>yarn.nodemanager.container-monitor.interval-ms</name><value>3000</value></property>
  <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value></property>
  <property><name>yarn.nodemanager.health-checker.script.timeout-ms</name><value>60000</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
  <property><name>yarn.log.server.url</name><value>rac1.mlg.oracle.com:19888/jobhistory/logs</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>rac1.mlg.oracle.com:8141</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>6144</value></property>
  <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>2.1</value></property>
  <property><name>yarn.nodemanager.delete.debug-delay-sec</name><value>0</value></property>
  <property><name>yarn.nodemanager.health-checker.interval-ms</name><value>135000</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>10240</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/grid/hadoop/yarn/logs,/grid1/hadoop/yarn/logs,/grid2/hadoop/yarn/logs</value></property>
  <property><name>yarn.nodemanager.log.retain-second</name><value>604800</value></property>
  <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>rac1.mlg.oracle.com:8025</value></property>
  <property><name>yarn.application.classpath</name><value>/etc/hadoop/conf,/usr/hdp/${hdp.version}/hadoop-client/*,/usr/hdp/${hdp.version}/hadoop-client/lib/*,/usr/hdp/${hdp.version}/hadoop-hdfs-client/*,/usr/hdp/${hdp.version}/hadoop-hdfs-client/lib/*,/usr/hdp/${hdp.version}/hadoop-yarn-client/*,/usr/hdp/${hdp.version}/hadoop-yarn-client/lib/*</value></property>
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>100</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.log-aggregation.compression-type</name><value>gz</value></property>
  <property><name>yarn.nodemanager.health-checker.script.path</name><value>/etc/hadoop/conf/health_check</value></property>
  <property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.group</name><value>hadoop</value></property>
  <property><name>yarn.log-aggregation.retain-seconds</name><value>2592000</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>rac1.mlg.oracle.com:8088</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.address</name><value>rac1.mlg.oracle.com:8050</value></property>
  <property><name>yarn.nodemanager.admin-env</name><value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value></property>
  <property><name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name><value>0.25</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>rac1.mlg.oracle.com:8030</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>true</value><description>Whether to enable work preserving recovery for the Resource Manager</description></property>
  <property><name>yarn.nodemanager.recovery.enabled</name><value>true</value><description>Whether to enable work preserving recovery for the Node Manager</description></property>
  <property><name>yarn.nodemanager.recovery.dir</name><value>/var/run/hadoop/yarn/recovery</value><description>The location for stored state on the Node Manager, if work preserving recovery is enabled</description></property>
  <property><name>yarn.timeline-service.webapp.address</name><value>rac1.mlg.oracle.com:8188</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name><value>yarn</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name><value>true</value></property>
</configuration>

[root@rac1 conf]# cat /etc/hadoop/conf/container-executor.cfg
#yarn.nodemanager.local-dirs=TODO-YARN-LOCAL-DIR
#yarn.nodemanager.linux-container-executor.group=hadoop
#yarn.nodemanager.log-dirs=TODO-YARN-LOG-DIR
#banned.users=hfds,bin,0
yarn.nodemanager.local-dirs=/grid/hadoop/yarn/local,/grid1/hadoop/yarn/local,/grid2/hadoop/yarn/local
yarn.nodemanager.log-dirs=/grid/hadoop/yarn/logs,/grid1/hadoop/yarn/logs,/grid2/hadoop/yarn/logs
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hfds,yarn,mapred,bin,0
#Comma separated list of users who can not run applications (users who cannot run containerexecutor)
allowed.system.users=foo,bar
#Comma separated list of allowed system users
min.user.id=99

3 Replies

Re: Errors during Smoke Test - org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)

New Contributor

I have tracked the problem down to this main error:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

[client@rac1 ~]$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-*.jar teragen 10000 tmp/teragenout
WARNING: Use "yarn jar" to launch YARN applications.
16/05/13 12:35:13 INFO client.RMProxy: Connecting to ResourceManager at rac1.mlg.oracle.com/192.168.56.21:8050
16/05/13 12:35:14 INFO terasort.TeraSort: Generating 10000 using 2
16/05/13 12:35:15 INFO mapreduce.JobSubmitter: number of splits:2
16/05/13 12:35:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1463132481086_0002
16/05/13 12:35:15 INFO impl.YarnClientImpl: Submitted application application_1463132481086_0002
16/05/13 12:35:15 INFO mapreduce.Job: The url to track the job: http://rac1.mlg.oracle.com:8088/proxy/application_1463132481086_0002/
16/05/13 12:35:15 INFO mapreduce.Job: Running job: job_1463132481086_0002
16/05/13 12:35:19 INFO mapreduce.Job: Job job_1463132481086_0002 running in uber mode : false
16/05/13 12:35:19 INFO mapreduce.Job: map 0% reduce 0%
16/05/13 12:35:19 INFO mapreduce.Job: Job job_1463132481086_0002 failed with state FAILED due to: Application application_1463132481086_0002 failed 2 times due to AM Container for appattempt_1463132481086_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463132481086_0002 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463132481086_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/05/13 12:35:19 INFO mapreduce.Job: Counters: 0
[client@rac1 ~]$

[client@rac1 ~]$ yarn logs -applicationId application_1463132481086_0002 | grep -i error
16/05/13 12:35:41 INFO client.RMProxy: Connecting to ResourceManager at rac1.mlg.oracle.com/192.168.56.21:8050
16/05/13 12:35:42 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
16/05/13 12:35:42 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
exit $hadoop_shell_errorcode
(the three lines above repeat 8 times in the launch script output)
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
(the same three lines repeat 8 more times)
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[client@rac1 ~]$

Where can I find this Java class, 'org.apache.hadoop.mapreduce.v2.app.MRAppMaster'? I have tried setting the classpath in both yarn-site.xml and the environment, but I still get this error.
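For reference, this class normally ships in the hadoop-mapreduce-client-app jar. A minimal sketch for confirming which jar on disk actually contains it (the /usr/hdp paths and jar glob are assumptions based on a standard HDP 2.4 layout; adjust them to your install):

```shell
# The class the AM launcher cannot find, as a jar entry path.
CLASS='org/apache/hadoop/mapreduce/v2/app/MRAppMaster.class'

# Locate candidate jars (assumed HDP root; harmless no-op elsewhere).
find /usr/hdp -name 'hadoop-mapreduce-client-app*.jar' 2>/dev/null

# Confirm the class is actually inside one of them (glob is an example).
for jar in /usr/hdp/*/hadoop-mapreduce/hadoop-mapreduce-client-app-*.jar; do
  [ -e "$jar" ] || continue          # glob did not match: not an HDP node
  unzip -l "$jar" | grep -q "$CLASS" && echo "found in $jar"
done
```

If the jar exists but the AM still cannot load the class, the problem is usually that the container's classpath never points at it, not that the jar is missing.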

[root@rac1 conf]# vi yarn-site.xml
<property>
  <name>yarn.application.classpath</name>
  <value>/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-client/*,/usr/hdp/2.4.0.0-169/hadoop/client/*,/usr/hdp/2.4.0.0-169/hadoop-mapreduce/*,/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*,/usr/hdp/2.4.0.0-169/hadoop-hdfs/*,/usr/hdp/2.4.0.0-169/hadoop-yarn/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,/etc/hadoop/conf,/usr/hdp/${hdp.version}/hadoop-client/*,/usr/hdp/${hdp.version}/hadoop-client/lib/*,/usr/hdp/${hdp.version}/hadoop-hdfs-client/*,/usr/hdp/${hdp.version}/hadoop-hdfs-client/lib/*,/usr/hdp/${hdp.version}/hadoop-yarn-client/*,/usr/hdp/${hdp.version}/hadoop-yarn-client/lib/*</value>
</property>

[client@rac1 ~]$ export CLASSPATH=/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*:/usr/hdp/2.4.0.0-169/hadoop/client/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/*:$HADOOP_HDFS_HOME/*:$HADOOP_HDFS_HOME/lib/*:$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*:$HADOOP_YARN_HOME/*:$HADOOP_YARN_HOME/lib/*:/etc/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop-client/*:/usr/hdp/2.4.0.0-169/hadoop-client/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs-client/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs-client/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn-client/*:/usr/hdp/2.4.0.0-169/hadoop-yarn-client/lib/*
[client@rac1 ~]$ echo $CLASSPATH
/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*:/usr/hdp/2.4.0.0-169/hadoop/client/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/*:/*:/lib/*:/*:/lib/*:/*:/lib/*:/etc/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop-client/*:/usr/hdp/2.4.0.0-169/hadoop-client/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs-client/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs-client/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn-client/*:/usr/hdp/2.4.0.0-169/hadoop-yarn-client/lib/*
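Note that in the echoed CLASSPATH above, the `$HADOOP_HDFS_HOME/*`, `$HADOOP_MAPRED_HOME/*`, and `$HADOOP_YARN_HOME/*` entries collapsed to bare `/*:/lib/*`, which shows those variables were not set in the shell. A minimal sketch of setting them first; the /usr/hdp/current symlink targets are an assumption based on a standard HDP client layout, so verify they exist on your node:

```shell
# Assumed HDP client symlinks; check these paths before relying on them.
export HADOOP_HOME=/usr/hdp/current/hadoop-client
export HADOOP_HDFS_HOME=/usr/hdp/current/hadoop-hdfs-client
export HADOOP_MAPRED_HOME=/usr/hdp/current/hadoop-mapreduce-client
export HADOOP_YARN_HOME=/usr/hdp/current/hadoop-yarn-client

# Now $HADOOP_MAPRED_HOME/* expands to a real path instead of /*.
echo "$HADOOP_MAPRED_HOME"
```

With these exported before building CLASSPATH, the variable references in the export above would expand to real directories rather than empty strings.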

Re: Errors during Smoke Test - org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)

New Contributor

Where can I find this Java class, 'org.apache.hadoop.mapreduce.v2.app.MRAppMaster'? I have tried setting the classpath in both yarn-site.xml and the environment, but I still get this error.

Re: Errors during Smoke Test - org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)

New Contributor

I have lost the will to make this work after numerous attempts... Is anyone in the Hadoop community willing to help this poor soul?

1) I have tried numerous classpath changes in both of these files, but the job still fails with the same errors: yarn-site.xml (<name>yarn.application.classpath</name>) and mapred-site.xml (<name>mapreduce.application.classpath</name>).

[client@rac1 ~]$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-*.jar teragen 100 tmp/teragenout7
WARNING: Use "yarn jar" to launch YARN applications.
16/05/15 09:30:47 INFO client.RMProxy: Connecting to ResourceManager at rac1.mlg.oracle.com/192.168.56.21:8050
16/05/15 09:30:48 INFO terasort.TeraSort: Generating 100 using 2
16/05/15 09:30:48 INFO mapreduce.JobSubmitter: number of splits:2
16/05/15 09:30:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1463296102675_0007
16/05/15 09:30:48 INFO impl.YarnClientImpl: Submitted application application_1463296102675_0007
16/05/15 09:30:48 INFO mapreduce.Job: The url to track the job: http://rac1.mlg.oracle.com:8088/proxy/application_1463296102675_0007/
16/05/15 09:30:48 INFO mapreduce.Job: Running job: job_1463296102675_0007
16/05/15 09:30:58 INFO mapreduce.Job: Job job_1463296102675_0007 running in uber mode : false
16/05/15 09:30:58 INFO mapreduce.Job: map 0% reduce 0%
16/05/15 09:31:00 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:9044. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:01 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:9044. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:02 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:9044. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:09 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:41394. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:10 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:41394. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:11 INFO ipc.Client: Retrying connect to server: rac1.mlg.oracle.com/192.168.56.21:41394. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/05/15 09:31:11 INFO mapreduce.Job: Job job_1463296102675_0007 failed with state FAILED due to: Application application_1463296102675_0007 failed 2 times due to AM Container for appattempt_1463296102675_0007_000002 exited with exitCode: 255
For more detailed output, check application tracking page: http://rac1.mlg.oracle.com:8088/cluster/app/application_1463296102675_0007 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463296102675_0007_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
16/05/15 09:31:11 INFO mapreduce.Job: Counters: 0
[client@rac1 ~]$

[client@rac1 ~]$ yarn logs -applicationId application_1463296102675_0006 | grep -i error
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

2) I have also set the Hadoop environment variables in the 'client' user's environment (bash_profile) and manually set the CLASSPATH, but the job still fails.

3) Also, while trying to modify the classpath in these config files, I have hit 2 separate errors:

[client@rac1 ~]$ yarn logs -applicationId application_1463312606793_0004

error #1:
2016-05-15 14:05:56,611 FATAL [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.IllegalArgumentException: Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path

error #2:
2016-05-15 13:54:36,531 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: No FileSystem for scheme: hdfs
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: No FileSystem for scheme: hdfs
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
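Error #1 above points at the usual root cause of the missing MRAppMaster on HDP: the `${hdp.version}` placeholder is never substituted, so the MapReduce framework tarball path (and any classpath entries that use it) resolve to non-existent locations. One commonly suggested workaround is to hard-code the installed version string in mapred-site.xml. This is a sketch only: the version 2.4.0.0-169 is taken from the classpath shown earlier in this thread, so confirm it against the actual directory names under /usr/hdp on your node before using it:

```xml
<!-- Sketch: replace 2.4.0.0-169 with the version actually installed
     under /usr/hdp/ on your node. -->
<property>
  <name>mapreduce.application.framework.path</name>
  <value>/hdp/apps/2.4.0.0-169/mapreduce/mapreduce.tar.gz#mr-framework</value>
</property>
```

The same substitution applies anywhere else `${hdp.version}` appears, for example in mapreduce.application.classpath and in the AM/map/reduce Java options (a `-Dhdp.version=2.4.0.0-169` system property is the other commonly suggested route).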
