Support Questions

Find answers, ask questions, and share your expertise


Issue on running spark application in Yarn-cluster mode

Explorer

When I execute the following in yarn-client mode it works fine and returns the result properly, but when I try to run it in yarn-cluster mode I get an error.

spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client /home/abc/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 10
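The cluster-mode attempt is the same submission with only the master changed (a sketch of what I run, assuming the same jar and arguments; on Spark 1.0.x the mode is selected via `--master yarn-cluster`):

```shell
# Identical to the working yarn-client submission except for the master:
# in yarn-cluster mode the driver runs inside the YARN ApplicationMaster,
# not in the client process.
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  /home/abc/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 10
```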

The yarn-client command above works fine, but when I execute the same example in yarn-cluster mode I get the following error.

14/10/07 09:40:24 INFO Client: Application report from ASM:
         application identifier: application_1412117173893_1150
         appId: 1150
         clientToAMToken: Token { kind: YARN_CLIENT_TOKEN, service:  }
         appDiagnostics:
         appMasterHost: N/A
         appQueue: root.default
         appMasterRpcPort: -1
         appStartTime: 1412689195537
         yarnAppState: ACCEPTED
         distributedFinalState: UNDEFINED
         appTrackingUrl: http://spark.abcd.com:8088/proxy/application_1412117173893_1150/
         appUser: abc
14/10/07 09:40:25 INFO Client: Application report from ASM:
         application identifier: application_1412117173893_1150
         appId: 1150
         clientToAMToken: null
         appDiagnostics: Application application_1412117173893_1150 failed 2 times due to AM Container for appattempt_1412117173893_1150_000002 exited with  exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:511)
        at org.apache.hadoop.util.Shell.run(Shell.java:424)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:656)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:279)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

main : command provided 1
main : user is abc
main : requested yarn user is abc

Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
         appMasterHost: N/A
         appQueue: root.default
         appMasterRpcPort: -1
         appStartTime: 1412689195537
         yarnAppState: FAILED
         distributedFinalState: FAILED
         appTrackingUrl: spark.abcd.com:8088/cluster/app/application_1412117173893_1150
         appUser: abc

Where might the problem be? Sometimes when I execute in yarn-cluster mode I get the following instead, but I don't see any result.

14/10/08 01:51:57 INFO Client: Application report from ASM:
         application identifier: application_1412117173893_1442
         appId: 1442
         clientToAMToken: Token { kind: YARN_CLIENT_TOKEN, service:  }
         appDiagnostics:
         appMasterHost: spark.abcd.com
         appQueue: root.default
         appMasterRpcPort: 0
         appStartTime: 1412747485673
         yarnAppState: FINISHED
         distributedFinalState: SUCCEEDED
         appTrackingUrl: http://spark.abcd.com:8088/proxy/application_1412117173893_1442/A
         appUser: abc
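In yarn-cluster mode the driver runs in the ApplicationMaster, so (if I understand correctly) the SparkPi output should end up in the container logs rather than on the client console. I assume it can be checked with something like this, provided log aggregation is enabled on the cluster:

```shell
# Fetch the aggregated logs of all containers for the finished application
# (ID taken from the report above) and look for SparkPi's result line.
# Requires yarn.log-aggregation-enable=true on the cluster.
yarn logs -applicationId application_1412117173893_1442 | grep -i "Pi is roughly"
```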

 Thanks
