
How to override default log4j properties in yarn cluster mode


Super Mentor

@Balakumar Balasundaram

Log in to Ambari, then navigate to:

Yarn --> Configs --> Advanced --> Advanced yarn-log4j

Make your desired changes there. Is that what you are looking for?
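As an illustration, the "Advanced yarn-log4j" template is a standard log4j properties file, so raising the verbosity can be a matter of changing the stock Hadoop root-logger line (the DEBUG level below is just an example; pick the level you need):

```
# Fragment of the yarn-log4j template (illustrative change: INFO -> DEBUG)
hadoop.root.logger=DEBUG,console
log4j.rootLogger=${hadoop.root.logger}, EventCounter
```

After saving the change in Ambari, the affected YARN services must be restarted for the new logging configuration to take effect.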

Also, please take a look at "Advanced yarn-env" (the yarn-env template), where you can see how Ambari uses the "YARN_ROOT_LOGGER" property to define the logger:

YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER...........
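As a sketch, the relevant lines of the yarn-env template typically look something like the following (the exact content varies by HDP version, so treat this as an assumption, not the literal template):

```shell
# Default the YARN daemons' root logger if it is not already set,
# then pass it to the JVM via hadoop.root.logger.
export YARN_ROOT_LOGGER=${YARN_ROOT_LOGGER:-INFO,console}
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER}"
```

Overriding YARN_ROOT_LOGGER in this template is therefore another way to change the log level for the YARN daemons.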


@Balakumar Balasundaram

Since you have tagged Spark, I assume you are running a Spark app in yarn-cluster mode.

Create a log4j properties file, set the log level in it, and use the command below.

spark-submit --files file:///home/spark/ \
  --driver-java-options "-Dlog4j.configuration=./" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=./" \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 \
  /usr/hdp/current/spark-client/lib/spark-examples- 10
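The file shipped via --files is an ordinary log4j properties file. A minimal sketch, based on the log4j.properties.template that ships with Spark 1.x (the WARN level is illustrative; set whatever level you want the driver and executors to use):

```
# Minimal log4j configuration for the Spark driver/executors (illustrative)
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

Because -Dlog4j.configuration points at the file by its bare name in the container's working directory, the name passed to --files and the name in the -D options must match.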

If it is not Spark, please use the solution provided by @Jay SenSharma.

You could also set the log level in the Spark code:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]) = {
  // Set the log level before creating the SparkContext
  Logger.getLogger("org").setLevel(Level.WARN)
  val conf = new SparkConf().setAppName("KafkaToHdfs")
  val sc = new SparkContext(conf)
  // ...
}