
How to override default log4j properties in yarn cluster mode


Reply from @Jay SenSharma (Master Mentor):

@Balakumar Balasundaram

Log in to Ambari, then navigate to:

YARN --> Configs --> Advanced --> Advanced yarn-log4j

and make your desired changes there. Is that what you are looking for?
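For example, to get more verbose logging from a specific YARN daemon you could add or change a logger line in that text area; a minimal sketch, where the package name and DEBUG level are only illustrations:

log4j.logger.org.apache.hadoop.yarn.server.resourcemanager=DEBUG

After saving, Ambari will prompt you to restart the affected YARN components so the change takes effect.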

Also, please take a look at "Advanced yarn-env" (the yarn-env template), where you will see how Ambari uses the YARN_ROOT_LOGGER property to define:

YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER...
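This means the root log level for the YARN scripts can also be overridden by exporting that variable before invoking them; a minimal sketch, assuming DEBUG,console is the desired value:

export YARN_ROOT_LOGGER=DEBUG,console   # overrides the default INFO,console for this shell
yarn node -list                         # this yarn CLI invocation now logs at DEBUG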


A second reply:

@Balakumar Balasundaram

Since you have tagged Spark, I am assuming you are running a Spark application in yarn-cluster mode.

Create a log4j.properties file, set the log level there, and use the command below.
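A minimal log4j.properties for this purpose might look like the following; the ERROR level and console appender are just an illustration, patterned after Spark's bundled log4j.properties.template:

# Send everything at ERROR and above to the console
log4j.rootCategory=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n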

spark-submit --files file:///home/spark/log4j.properties \
  --driver-java-options "-Dlog4j.configuration=./log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=./log4j.properties" \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --num-executors 3 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  /usr/hdp/current/spark-client/lib/spark-examples-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar 10
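The --files option ships log4j.properties into the working directory of every YARN container, which is why the relative ./log4j.properties path in the driver and executor JVM options can resolve. If the relative form is not picked up in your environment, the URL form -Dlog4j.configuration=file:log4j.properties is a commonly used alternative, since log4j.configuration expects a URL or a classpath resource.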

If it is not Spark, please use the solution provided by @Jay SenSharma.

A third reply:

You could also do it in the Spark code:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    // Raise the root log level before the application starts logging
    Logger.getRootLogger.setLevel(Level.ERROR)

    val conf = new SparkConf().setAppName("KafkaToHdfs")
    val sc = new SparkContext(conf)
    // ... rest of the application ...
  }
}
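Note that in yarn-cluster mode this call runs inside the driver JVM only; the executors are separate JVMs with their own log4j configuration, so to quiet the executors as well you still need the --files / extraJavaOptions approach shown in the previous reply.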