
Spark job submit log messages on console

New Contributor

When I submit a Spark job using the command below,

spark-submit --num-executors 10 --executor-cores 5 --executor-memory 2G --master yarn-cluster --conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true --class com.example.SparkJob target/scala-2.10/spark-poc-assembly-0.1.jar hdfs:///user/aahmed/example.csv

it prints the following messages to the console. I want to see org.apache.spark INFO-level messages as well. How and where can I configure this?

16/04/08 15:09:50 INFO Client: Application report for application_1460098549233_0013 (state: RUNNING)

16/04/08 15:09:51 INFO Client: Application report for application_1460098549233_0013 (state: RUNNING)

16/04/08 15:09:52 INFO Client: Application report for application_1460098549233_0013 (state: RUNNING)

16/04/08 15:09:53 INFO Client: Application report for application_1460098549233_0013 (state: RUNNING)

16/04/08 15:09:54 INFO Client: Application report for application_1460098549233_0013 (state: RUNNING)



To configure log levels, add

--conf "" 
--conf "" 

This assumes you have a file called log4j.properties on the classpath (usually in the resources directory of the project you're using to build the jar). This log4j.properties file then controls the verbosity of Spark's logging.

I usually use something derived from the spark default, with some customisation like:

# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark-project.jetty=WARN
log4j.logger.org.spark-project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR

# Logging for this application
log4j.logger.org.apache.spark=INFO
log4j.logger.com.example=INFO
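Put together, the submit command from the question would then gain the two options; a sketch (this assumes log4j.properties is bundled at the root of the assembly jar, so it is on the classpath):

```shell
# Sketch: the question's own command plus the two log4j options
# (assumes log4j.properties sits at the root of the assembly jar)
spark-submit \
  --num-executors 10 --executor-cores 5 --executor-memory 2G \
  --master yarn-cluster \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --class com.example.SparkJob \
  target/scala-2.10/spark-poc-assembly-0.1.jar hdfs:///user/aahmed/example.csv
```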

Something else to note here is that in yarn-cluster mode, all your important logs (especially the executor logs) will be aggregated by YARN's log aggregation when the application finishes. You can retrieve them with

yarn logs -applicationId <application>

This will show you all the logs, subject to the levels in your log4j configuration.

You can also switch to yarn-client mode to see more logs printed directly onto the console. Remember to switch back to yarn-cluster mode after you are done debugging.

Rising Star

Hi Simmon,

Thanks for the tip. I have an additional question, though: I assume this configuration works in "yarn-cluster" mode (when the driver and executors run under YARN's responsibility on the cluster nodes)?

My problem comes from the fact that I perform my spark-submit in "yarn-client" mode, which means my driver is not managed by YARN; as a consequence, the driver's logs go to the console of the server where I ran the "spark-submit" command. Since this is a long-running job (several days), I would like to redirect the driver's logs to a dedicated file via the log4j configuration, but I couldn't get that to work with this configuration. Any idea how to achieve this?

Thanks again


@Sebastian Carroll These options will work in both yarn-client and yarn-cluster mode. What you will need to do is ensure you have an appropriate file appender in the log4j configuration.
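For example, a minimal file appender along those lines might look like the following (the appender name, path, and sizes are illustrative, not from this thread):

```properties
# Send logs to a rolling file instead of (or in addition to) the console
# (path and limits are illustrative; adjust to your environment)
log4j.rootCategory=INFO, RollingAppender
log4j.appender.RollingAppender=org.apache.log4j.RollingFileAppender
log4j.appender.RollingAppender.File=/var/log/spark/driver.log
log4j.appender.RollingAppender.MaxFileSize=50MB
log4j.appender.RollingAppender.MaxBackupIndex=5
log4j.appender.RollingAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.RollingAppender.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```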

That said, if you have a job that runs for multiple days, you are far better off using yarn-cluster mode so the driver is safely located on the cluster, rather than relying on a single node with a yarn-client process attached to it.

Rising Star

Well, I couldn't make it work...

I tried out several options:

I successfully got the driver's logs in a dedicated log file when using the following option with my "spark-submit" command line: --driver-java-options "-Dlog4j.configuration=file:///local/home/.../"

I couldn't obtain the same with your suggestion: --conf "spark.driver.extraJavaOptions=...

For the executors' logs, I gave your suggestion a try as well: --conf "spark.executor.extraJavaOptions=... but failed to notice any change in the logging behaviour. I guess this is a classpath issue, but I couldn't find any relevant example in the documentation 😞

If I use a plain -Dlog4j.configuration=log4j.properties (no explicit path) in those --conf options, where should I put this file? In the "root" folder of the fat jar that I pass to the spark-submit command? Somewhere else?

Note that I also tried with :

--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///local/home/.../" to point to an external file (not in the jar file) but it failed too...

Any idea about something wrong in my configuration ?

Thanks for your help

Rising Star

Thanks a lot, I will give a new try to this configuration with yarn-client mode.

To answer your question: yes, I would really prefer to use yarn-cluster mode for my job, but I couldn't make it work so far. To summarize: my job needs to access HBase for read/write operations, and my cluster is secured with Kerberos. When I launch my job in yarn-cluster mode, I face an authentication problem when reaching HBase. I found a workaround in yarn-client mode, but I have no idea at the moment how to solve this issue in yarn-cluster mode.