Created on 12-06-2020 02:52 PM - last edited on 12-16-2020 08:17 AM by VidyaSargur
I am trying to run a Spark job via the Talend tool and am getting the error below. Can anyone please suggest a fix?
[INFO ]: org.apache.spark.deploy.yarn.Client -
client token: N/A
diagnostics: Application application_1607145350732_0015 failed 2 times due to AM Container for appattempt_1607145350732_0015_000002 exited with exitCode: 10
Failing this attempt.Diagnostics: [2020-12-06 15:49:09.302]Exception from container-launch.
Container id: container_1607145350732_0015_02_000001
Exit code: 10
[2020-12-06 15:49:09.305]Container exited with a non-zero exit code 10. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
[2020-12-06 15:49:09.305]Container exited with a non-zero exit code 10. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
For more detailed output, check the application tracking page: http://localhost:8088/cluster/app/application_1607145350732_0015 Then click on links to logs of each attempt.
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.users.appsuser
start time: 1607291334541
final status: FAILED
tracking URL: http://localhost:8088/cluster/app/application_1607145350732_0015
user: appsuser
[ERROR]: org.apache.spark.SparkContext - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at talend_developement.sparkproduct_cld_step6__0_1.sparkPRODUCT_CLD_STEP6_.runJobInTOS(sparkPRODUCT_CLD_STEP6_.java:1387)
at talend_developement.sparkproduct_cld_step6__0_1.sparkPRODUCT_CLD_STEP6_.main(sparkPRODUCT_CLD_STEP6_.java:1279)
[INFO ]: org.spark_project.jetty.server.AbstractConnector - Stopped Spark@62551ff6{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
[INFO ]: org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://0.0.0.0:4040
[ERROR]: org.apache.spark.network.client.TransportClient - Failed to send RPC 6049494344693001682 to /10.4.37.168:42590: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
[ERROR]: org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint - Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 6049494344693001682 to /10.4.37.168:42590: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
[INFO ]: org.apache.spark.scheduler.cluster.SchedulerExtensionServices - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
[ERROR]: org.apache.spark.util.Utils - Uncaught exception in thread main
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:551)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:93)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:151)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:517)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1652)
at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1921)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1317)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1920)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:587)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at talend_developement.sparkproduct_cld_step6__0_1.sparkPRODUCT_CLD_STEP6_.runJobInTOS(sparkPRODUCT_CLD_STEP6_.java:1387)
at talend_developement.sparkproduct_cld_step6__0_1.sparkPRODUCT_CLD_STEP6_.main(sparkPRODUCT_CLD_STEP6_.java:1279)
Caused by: java.io.IOException: Failed to send RPC 6049494344693001682 to /10.4.37.168:42590: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
[INFO ]: org.apache.spark.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
Created 12-29-2020 09:18 AM
@Narahari Looking at the logs, this appears to be a connection issue.
The AM container failed with the error below:
[2020-12-06 15:49:09.305]Container exited with a non-zero exit code 10. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
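The tail shown in the diagnostics is truncated to 4096 bytes, so the real cause is often not visible there. The full container output is usually available through the aggregated YARN logs. A minimal sketch (assumes log aggregation is enabled on the cluster and that you run it as the submitting user, with the yarn CLI on the PATH):

```shell
# Application ID taken from the log above.
APP_ID="application_1607145350732_0015"

# Pull the full aggregated logs for all containers of the failed
# application; guard so the command is skipped where yarn is absent.
if command -v yarn >/dev/null 2>&1; then
  yarn logs -applicationId "$APP_ID"
fi
```

The prelaunch.err and stderr files of container_1607145350732_0015_02_000001 are the ones to read first.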
And then tracing it further:
[ERROR]: org.apache.spark.SparkContext - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
Leads us to this ClosedChannelException:
[ERROR]: org.apache.spark.network.client.TransportClient - Failed to send RPC 6049494344693001682 to /10.4.37.168:42590: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
So first verify that the driver host can actually reach 10.4.37.168:42590 (the RPC endpoint in the failing send) and investigate from there.
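A quick way to rule out basic network reachability is a plain TCP connect from the driver host. A minimal sketch using only the Python standard library (the host and port below are taken from the error line; adjust them for your cluster, and note the executor RPC port is ephemeral, so run this while the failure is reproducible):

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False


# Example (endpoint from the log above):
#   port_reachable("10.4.37.168", 42590)
```

If the connect fails, check firewalls, security groups, and whether the Talend client resolves the cluster hosts correctly; if it succeeds, the channel is more likely being closed by the AM dying, so the container logs are the next stop.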