Member since: 06-22-2017
Posts: 5
Kudos Received: 0
Solutions: 0
06-25-2017
06:26 PM
Thanks for your reply, Kalai. In my submit command I am using Spark in YARN mode (--master yarn), not standalone mode, so I do not think it will use this configuration. Also, as far as I understand, this setting configures the master node in standalone mode and has nothing to do with the driver port. Anyway, to confirm, I tried making the changes you mentioned and it still ran on random ports.
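For what it's worth, a minimal sketch of pinning the driver-side ports instead of letting Spark pick random ones on each submit. The properties below exist in Spark 2.x; the port numbers are placeholders you would choose for your own network, and the class name and jar path are carried over from the command earlier in the thread:

```shell
# Sketch only: fix the driver RPC port and the driver block-manager port,
# and allow a few retries if a pinned port is already in use.
# Port values 40000/40001 are placeholders, not recommendations.
/bin/spark-submit \
  --class example.Hello \
  --jars ... \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.driver.port=40000 \
  --conf spark.driver.blockManager.port=40001 \
  --conf spark.port.maxRetries=16 \
  ...(jar file path)
```

With the ports pinned, any firewall rules only need to allow that small, known range rather than the full ephemeral range.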
06-23-2017
11:04 PM
I am not able to do that, since the driver port changes randomly on each job submit. Can I fix the port value? Also, since the client is connecting to the driver on the local IP, I don't think a firewall is the problem. Right?
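One quick way to rule a firewall in or out is to probe the driver port from one of the worker nodes while the job is running. A minimal check, assuming `nc` (netcat) is installed; `<driver-host>` is a placeholder for the host the driver actually runs on, and the port should be the one printed in your own application log:

```shell
# Sketch only: from a worker node, test whether the driver port is reachable.
# <driver-host> and 35994 are placeholders; take both from your job's log.
nc -zv <driver-host> 35994
```

If this fails from the worker but succeeds on the driver host itself, a firewall or routing issue between the nodes becomes the likely suspect.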
06-23-2017
10:56 PM
Yes. I checked that in Ambari and Spark is installed. Here is a screenshot:
06-23-2017
03:21 PM
Thank you for responding, Mark. Ambari does not show any alerts regarding Spark. Is there any other way to make sure the cluster is running? Also, I ran it in local mode like this: /bin/spark-submit --class example.Hello --jars ... --master local --supervise --conf spark.yarn.driver.memoryOverhead=1024 and it ran without any problems.
06-22-2017
10:41 PM
When I submit a Spark job to the cluster it fails and gives me the following error in the log file:

Caused by: java.io.IOException: Failed to connect to /0.0.0.0:35994
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
    at org.apache.spark.rpc.netty.Outbox$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Which I guess means it failed to connect to the driver. I tried increasing the "spark.yarn.executor.memoryOverhead" parameter, but it doesn't help. This is the submit command I use:

/bin/spark-submit --class example.Hello --jars ... --master yarn --deploy-mode cluster --supervise --conf spark.yarn.driver.memoryOverhead=1024 ...(jar file path)

I am using HDP-2.6.1.0 and Spark 2.1.1.
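A side note, not a confirmed diagnosis: the address 0.0.0.0 in the error suggests the driver advertised the wildcard address instead of a routable hostname, which can happen when a node's hostname does not resolve to a real IP. A minimal sketch of checking name resolution on each cluster node:

```shell
# Sketch only: verify this node's hostname resolves to a routable IP,
# not 0.0.0.0 or 127.0.0.1. Run on the node where the driver is launched.
hostname -f
getent hosts "$(hostname -f)"
```

If the second command prints a loopback address, fixing /etc/hosts (or DNS) so the hostname maps to the node's real IP would be worth trying before anything else.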
Labels:
- Apache Spark
- Apache YARN