Created on 07-06-2017 03:39 PM - edited 09-16-2022 04:53 AM
When our Spark job starts, we get the following stack trace, and we are wondering which setting we could adjust to raise the timeout above 10 seconds.
17/07/06 14:57:41 INFO Remoting: Starting remoting
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:151)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:253)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at akka.remote.Remoting.start(Remoting.scala:179)
    at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
    at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:620)
    at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:617)
    at akka.actor.ActorSystemImpl._start(ActorSystem.scala:617)
    at akka.actor.ActorSystemImpl.start(ActorSystem.scala:634)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
    at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$doCreateActorSystem(AkkaUtils.scala:121)
    at org.apache.spark.util.AkkaUtils$anonfun$1.apply(AkkaUtils.scala:53)
    at org.apache.spark.util.AkkaUtils$anonfun$1.apply(AkkaUtils.scala:52)
    at org.apache.spark.util.Utils$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)
    at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266)
    at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:217)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:186)
    at org.apache.spark.deploy.SparkHadoopUtil$anon$1.run(SparkHadoopUtil.scala:69)
    at org.apache.spark.deploy.SparkHadoopUtil$anon$1.run(SparkHadoopUtil.scala:68)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    ... 4 more
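For reference, if raising the usual Spark network/RPC timeouts is the right fix, we could pass them at submit time along these lines. This is only a sketch: we are not sure which property actually backs the 10-second Akka remoting startup limit in the trace, spark.akka.timeout is only read by Spark 1.x builds that still use Akka, and the class and jar names below are placeholders for our real job.

# Sketch only; property names are the standard Spark timeout settings,
# but which one governs this particular startup timeout may depend on the Spark version.
spark-submit \
  --class com.example.MyJob \
  --conf spark.network.timeout=120s \
  --conf spark.rpc.askTimeout=120s \
  --conf spark.akka.timeout=120 \
  my-job.jar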
Created 07-10-2017 06:15 AM
Can you add more information, such as the command that throws this error and which log you found it in?
Created 07-10-2017 06:40 AM
1st. If you executed the spark command with master set to local, then check the connection host and port on that local server.
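For example, from the node where the executor starts, something like this can confirm the host resolves and the port is reachable (the host and port below are placeholders; use the ones printed earlier in the executor log):

hostname -f                     # name the driver binds to
getent hosts $(hostname -f)     # confirm it resolves to a reachable address
nc -zv driver-host 12345        # replace with the actual driver host and port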
2nd. Check your firewall & iptables status whether it is of or off.