
How do I access the Spark Web UI?

Contributor

Hi,

I have a self-contained application with Spark 1.4.1, Scala 2.11, and sbt 0.13.14. I run my application with sbt run, then sbt package to generate a jar file.

I don't know where my application is running, on how many servers, or in which cluster. I know that from the Spark UI I can see all my servers and change the memory sizes of the executor and the driver. The problem is that I don't know how to access the Spark Web UI.

I tried 127.0.0.1:4040, but the page is inaccessible. While my application runs, it shows that my driver is at the address 10.0.2.15; I tried 10.0.2.15:4040 as well, but in vain. Note that I'm using Apache Ambari, from which I can access all the Hadoop clusters (I use Hadoop because my data is stored in HDFS).

Can you please tell me how I can access the Spark UI?

Thank you.

24 REPLIES

Super Guru
@Alicia Alicia

It looks like you are running your sandbox in NAT mode; that's why you are not able to reach the host 10.0.2.15. Could you please try pinging 10.0.2.15 from the command line to see whether you can connect?

Super Guru

@Alicia Alicia Yes, my guess is that you are running the sandbox in NAT mode. You can still access the History Server web UI at http://localhost:18080, because a port-forwarding rule is configured for it. Can you try this and confirm?

To access the web UI on port 4040, you need to configure a port-forwarding rule in your VirtualBox, for example as shown below.
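A minimal sketch of such a rule, added from the host with VBoxManage while the VM is powered off (the VM name "Hortonworks Sandbox" is an assumption; substitute the name shown in your VirtualBox manager):

VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "spark-ui,tcp,127.0.0.1,4040,,4040"

After that, http://localhost:4040 on the host should forward to port 4040 in the guest while the application is running.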

Contributor

Sorry, no: this small program takes nearly 30 seconds to run, even without Thread.sleep(...). You can see the output of the run below; I kept only about half of it. Excuse me for this very long reply, but I really can't understand why such a small program produces all this output. Could it be an error in the Hadoop or Spark configuration?

Can you please tell me why? I also notice that the executor runs many times; is that normal?

Thank you.

16/12/24 16:29:21 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4 blocks
16/12/24 16:29:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
16/12/24 16:29:21 INFO Executor: Finished task 1.0 in stage 11.0 (TID 23). 1689 bytes result sent to driver
16/12/24 16:29:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 20 ms
16/12/24 16:29:23 INFO DAGScheduler: looking for newly runnable stages
16/12/24 16:29:23 INFO DAGScheduler: running: Set()
16/12/24 16:29:23 INFO DAGScheduler: waiting: Set(ShuffleMapStage 15, ShuffleMapStage 12, ShuffleMapStage 16, ShuffleMapStage 13, ShuffleMapStage 17, ResultStage 18, ShuffleMapStage 14)

16/12/24 16:29:24 INFO Executor: Running task 0.0 in stage 12.0 (TID 26)
16/12/24 16:29:24 INFO Executor: Running task 1.0 in stage 12.0 (TID 27)
16/12/24 16:29:24 INFO Executor: Running task 2.0 in stage 12.0 (TID 28)
16/12/24 16:29:24 INFO Executor: Running task 3.0 in stage 12.0 (TID 29)
16/12/24 16:29:24 INFO BlockManager: Found block rdd_16_1 locally
16/12/24 16:29:24 INFO BlockManager: Found block rdd_15_1 locally
16/12/24 16:29:24 INFO BlockManager: Found block rdd_16_2 locally
16/12/24 16:29:24 INFO BlockManager: Found block rdd_15_2 locally
16/12/24 16:29:24 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4 blocks
16/12/24 16:29:24 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
16/12/24 16:29:24 INFO BlockManager: Found block rdd_16_3 locally
16/12/24 16:29:24 INFO Executor: Running task 1.0 in stage 13.0 (TID 31)
16/12/24 16:29:24 INFO Executor: Running task 0.0 in stage 13.0 (TID 30)
16/12/24 16:29:24 INFO Executor: Running task 2.0 in stage 13.0 (TID 32)
16/12/24 16:29:24 INFO Executor: Running task 3.0 in stage 13.0 (TID 33)
16/12/24 16:29:24 INFO BlockManager: Found block rdd_21_0 locally
16/12/24 16:29:24 INFO BlockManager: Found block rdd_20_0 locally
16/12/24 16:29:24 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4 blocks
16/12/24 16:29:24 INFO BlockManager: Found block rdd_20_2 locally
16/12/24 16:29:24 INFO Executor: Finished task 3.0 in stage 13.0 (TID 33). 1689 bytes result sent to driver
16/12/24 16:29:24 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4
16/12/24 16:29:25 INFO TaskSetManager: Starting task 1.0 in stage 14.0 (TID 35, localhost, partition 1, PROCESS_LOCAL, 5377 bytes)
16/12/24 16:29:25 INFO Executor: Running task 0.0 in stage 14.0 (TID 34)
16/12/24 16:29:25 INFO Executor: Running task 2.0 in stage 14.0 (TID 36)
16/12/24 16:29:25 INFO Executor: Running task 3.0 in stage 14.0 (TID 37)
16/12/24 16:29:25 INFO BlockManager: Found block rdd_16_1 locally
sk 3.0 in stage 16.0 (TID 45, localhost, partition 3, PROCESS_LOCAL, 5377 bytes)
16/12/24 16:29:26 INFO Executor: Running task 0.0 in stage 16.0 (TID 42)
16/12/24 16:29:26 INFO Executor: Running task 1.0 in stage 16.0 (TID 43)
16/12/24 16:29:26 INFO Executor: Running task 2.0 in stage 16.0 (TID 44)
16/12/24 16:29:29 INFO Executor: Running task 3.0 in stage 16.0 (TID 45)
16/12/24 16:29:29 INFO BlockManagerInfo: Removed broadcast_13_piece0 on 10.0.2.15:46654 in memory (size: 6.0 KB, free: 348.1 MB)
 in memory (size: 5.5 KB, free: 348.1 MB)
16/12/24 16:29:29 INFO Executor: Running task 0.0 in stage 18.0 (TID 50)
16/12/24 16:29:29 INFO Executor: Running task 2.0 in stage 18.0 (TID 52)
16/12/24 16:29:29 INFO Executor: Running task 3.0 in stage 18.0 (TID 53)
16/12/24 16:29:29 INFO Executor: Running task 1.0 in stage 18.0 (TID 51)
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_3 locally
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_3 locally
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_1 locally
16/12/24 16:29:30 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4 blocks
16/12/24 16:29:30 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_1 locally
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_2 locally
16/12/24 16:29:30 INFO BlockManager: Found block rdd_15_2 locally
16/12/24 16:29:30 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 4 blocks
16/12/24 16:29:30 INFO Executor: Running task 1.0 in stage 30.0 (TID 55)
16/12/24 16:29:30 INFO Executor: Running task 2.0 in stage 30.0 (TID 56)
16/12/24 16:29:30 INFO Executor: Running task 0.0 in stage 30.0 (TID 54)
16/12/24 16:29:30 INFO Executor: Running task 3.0 in stage 30.0 (TID 57)
16/12/24 16:29:30 INFO BlockManager: Found block rdd_20_1 locally
16/12/24 16:29:30 INFO BlockManager: Found block rdd_20_1 locally
16/12/24 16:29:30 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
16/12/24 16:29:30 INFO MemoryStore: Block rdd_107_1 stored as values in memory (estimated size 16.0 B, free 347.7 MB)
16/12/24 16:29:38 INFO MemoryStore: MemoryStore cleared
16/12/24 16:29:38 INFO BlockManager: BlockManager stopped
16/12/24 16:29:38 INFO BlockManagerMaster: BlockManagerMaster stopped
16/12/24 16:29:38 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/12/24 16:29:39 INFO SparkContext: Successfully stopped SparkContext
[success] Total time: 164 s, completed Dec 24, 2016 4:29:42 PM
16/12/24 16:29:42 INFO ShutdownHookManager: Shutdown hook called
16/12/24 16:29:42 INFO ShutdownHookManager: Deleting directory /tmp/spark-1d6bc2e8-5756-4107-82ee-8950cb6c5875
[root@sandbox projectFilms]# 

Contributor

@Rajkumar Singh While it runs, it displays lines like these:

 INFO BlockManagerInfo: Added broadcast_20_piece0 in memory on 10.0.2.15:44895 (size: 6.7 KB, free: 348.1 MB)

INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040

My code runs successfully; it just takes a lot of time.

I tried http://10.0.2.15:4040, and I also changed the port to 4041 and 4042 to access the application logs, but the page is inaccessible.

You told me it can be accessed at this URL: http://<hostname_spark_history_server>:18080/, but I don't have port 18080, and how can I find out the <hostname_spark_history_server>?

Thank you.

Contributor

@Rajkumar Singh I checked it in cmd with ping 10.0.2.15; this is the result:

Pinging 10.0.2.15 with 32 bytes of data:

Request timed out.

Request timed out.

Request timed out.

Request timed out.

Ping statistics for 10.0.2.15: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

I can't access this address.

Contributor

I opened http://localhost:18080/ and it displays the result below. I think I should change the logging directory.

1.4.1 History Server

  • Timeline Service Location: http://sandbox.hortonworks.com:8188/
  • Last Updated: Dec 24, 2016 7:22:57 PM UTC
  • Service Started: Dec 24, 2016 12:22:54 PM UTC
  • Current Time: Dec 24, 2016 7:23:01 PM UTC
  • Timeline Service: Timeline service is enabled
  • History Provider: Apache Hadoop YARN Timeline Service

No completed applications found!

Did you specify the correct logging directory? Please verify your setting of spark.history.fs.logDirectory and whether you have the permissions to access it. It is also possible that your application did not run to completion or did not stop the SparkContext.

Show incomplete applications

Master Guru

Port 4040 is only available while the application is running; afterwards you have to look at the History Server.

See: https://spark.apache.org/docs/1.4.1/monitoring.html

As shown on that page, you must specify the logging directory and a few parameters for job events to be stored; see the sample configuration below.
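A minimal sketch of that configuration in conf/spark-defaults.conf (the hdfs:///spark-history directory is an assumption; use a directory that exists and that the Spark user can write to):

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-history
spark.history.fs.logDirectory    hdfs:///spark-history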

You can look at jobs after they have finished using the History Server:

./sbin/start-history-server.sh

You ran with 4 threads:

local[4]

Run Spark locally with K worker threads (which should be set to the number of cores on your machine).
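If you are unsure of the core count, local[*] tells Spark to use as many worker threads as there are logical cores on the machine, for example (myApp.jar is a placeholder for your packaged jar):

./bin/spark-submit --master "local[*]" myApp.jar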

You are running Spark locally.

Spark 1.4.1 is very old and you are running locally; why not upgrade to 1.6.2 or 2.x?

If you are running HDP 2.4 or HDP 2.5, the History Server is managed for you and running by default. If not, start it with Ambari. It is also better to run under YARN.

Contributor

Thank you for your answer. I'm a student and a beginner in Spark (not only do I use Spark 1.4.1, but I'm also running HDP 2.3.4 :-) ).

Can you please tell me how I can set the number of cores on my machine? Is that done from Ambari?

I run my code in a console. When I type:

./sbin/start-history-server.sh

it displays:

-bash: cd: ./sbin/start-history-server.sh: No such file or directory

Master Guru

cd to your Spark 1.4 directory, possibly /usr/hdp/current/spark-client. That's probably it.

Or it might be here:

/usr/hdp/current/spark-historyserver/

Run the start script from there, as sketched below.
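A minimal sketch, assuming the usual HDP client path (adjust if your installation differs):

cd /usr/hdp/current/spark-client
./sbin/start-history-server.sh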

Your server or PC has as many cores as its CPU has; if it's a PC, it might have just 4, 8, or 16.

You can set cores in Spark via SparkConf in your code (see the sketch below) or from the command line.
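A minimal sketch of the in-code approach in Scala (the application name is arbitrary):

import org.apache.spark.{SparkConf, SparkContext}

// Name the application and run locally with 4 worker threads.
val conf = new SparkConf().setAppName("MyApp").setMaster("local[4]")
val sc = new SparkContext(conf)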

Note that only values explicitly specified through spark-defaults.conf, SparkConf, or the command line will appear on the application's Environment tab. For all other configuration properties, you can assume the default value is used.

Please see the full page here: https://spark.apache.org/docs/1.4.1/configuration.html#available-properties

I highly recommend reading all of Spark's basic documentation before running Spark applications. They answer a lot of questions.

./bin/spark-submit --name "My app" --master local[4] --conf spark.shuffle.spill=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar

Super Collaborator

Are you running the Spark job via YARN? Then go to the Resource Manager (RM) UI. It will be running on your RM machine on port 8088. From there, find the Applications link that lists all running applications, and navigate to the page for your application. There you will find an Application Master link, which will connect you to the running application master. If the job has finished, the link will instead be History, which will connect you to the Spark History Server and show you the same UI for the completed app.
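For example, with default ports the RM UI is at http://<rm-host>:8088/cluster; from a cluster node you can also list applications and their tracking URLs from the command line:

yarn application -list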

In an HDP cluster, the Spark History Server is always running if the Spark service was installed via Ambari.

Once a Spark job is running, you cannot manually change its number of executors or its memory.