Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1944 | 06-15-2020 05:23 AM |
| | 15788 | 01-30-2020 08:04 PM |
| | 2092 | 07-07-2019 09:06 PM |
| | 8164 | 01-27-2018 10:17 PM |
| | 4639 | 12-31-2017 10:12 PM |
09-05-2018
03:02 PM
We configured 4 disks! This is not the first time we have configured this, and it is the same on all our lab clusters. Please look at this; you can clearly see 4 disks.
09-05-2018
02:38 PM
The last one is: http://<active namenode host>:50070/dfshealth.html#tab-datanode-volume-failures
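For reference, the same per-DataNode details (including failed-volume counts) can also be read from the NameNode's JMX endpoint instead of the HTML UI. A minimal sketch, using only the Scala standard library and keeping the placeholder host from the links above:

```scala
import scala.io.Source

// Minimal sketch: query the NameNode's JMX servlet for the NameNodeInfo bean.
// Its LiveNodes attribute is a JSON string with per-DataNode details
// (capacity, usage, and failed-volume counts on Hadoop 2.x).
object FetchNameNodeJmx {
  def main(args: Array[String]): Unit = {
    // Placeholder host, as in the UI links above.
    val url = "http://<active namenode host>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
    val body = Source.fromURL(url).mkString
    println(body)
  }
}
```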
09-05-2018
02:35 PM
This is what we get from http://xxx.xxx.xxx.xxx:50070/dfshealth.html#tab-datanode:
09-05-2018
02:31 PM
this is what we get from "http://<active namenode host>:50070/dfshealth.html#tab-overview"
09-05-2018
02:19 PM
Logging Levels
The valid logging levels are log4j's levels (from most specific to least):
- OFF (most specific, no logging)
- FATAL (most specific, little data)
- ERROR
- WARN
- INFO
- DEBUG
- TRACE (least specific, a lot of data)
- ALL (least specific, all data)
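As a side note, these same level names can also be applied per application at runtime. A minimal sketch, assuming Spark 2.x (the app name and local master below are just for illustration):

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: override the root log4j level for this application only,
// using one of the level names listed above.
object LogLevelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("log-level-sketch")   // illustrative name
      .master("local[1]")            // illustrative master, for a local test
      .getOrCreate()

    spark.sparkContext.setLogLevel("DEBUG")
    spark.sparkContext.parallelize(1 to 10).count()  // trigger some DEBUG output
    spark.stop()
  }
}
```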
09-05-2018
02:18 PM
For example, I changed everything to ALL (the least specific level) to get the most detail in the Spark logs and then restarted Spark, but I do not see the logs giving any more data:
# Set everything to be logged to the console
log4j.rootCategory=ALL, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=ALL
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ALL
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ALL
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ALL
log4j.logger.org.apache.spark.metrics.MetricsConfig=ALL
log4j.logger.org.apache.spark.deploy.yarn.Client=ALL
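One way to confirm whether the edited file is actually the one being picked up is to print the effective levels from the driver. A minimal sketch, assuming log4j 1.x on the classpath (as bundled with Spark 2.x):

```scala
import org.apache.log4j.LogManager

// Minimal sketch: print the configured root level and the effective level of one
// of the loggers set above, to verify which log4j.properties was actually loaded.
object CheckLog4jLevels {
  def main(args: Array[String]): Unit = {
    println(s"root level = ${LogManager.getRootLogger.getLevel}")
    println("org.eclipse.jetty effective level = " +
      LogManager.getLogger("org.eclipse.jetty").getEffectiveLevel)
  }
}
```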
09-05-2018
01:10 PM
We have the following in Advanced spark2-log4j-properties, from Ambari --> Spark2 --> Configs:
# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.spark.metrics.MetricsConfig=DEBUG
log4j.logger.org.apache.spark.deploy.yarn.Client=DEBUG
How do we change the current log4j configuration to debug mode?
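For a quick test, individual loggers can also be switched to DEBUG from driver code, without editing the properties in Ambari and restarting. A minimal sketch, assuming log4j 1.x (as bundled with Spark 2.x):

```scala
import org.apache.log4j.{Level, LogManager}

// Minimal sketch: raise the root logger and one Spark logger to DEBUG at runtime.
object EnableDebugLogging {
  def main(args: Array[String]): Unit = {
    LogManager.getRootLogger.setLevel(Level.DEBUG)
    LogManager.getLogger("org.apache.spark.deploy.yarn.Client").setLevel(Level.DEBUG)
    LogManager.getRootLogger.debug("debug logging enabled")  // should now be printed
  }
}
```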
Labels:
- Apache Spark
09-05-2018
12:36 PM
This is an example of what we have on the DataNode machine. You can see that usage is around 47-61%, and each disk is 20 GB in size.
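For comparison, the same per-disk numbers can be collected without a screenshot. A minimal sketch, assuming the four data directories mentioned in this thread (/data/sdb ... /data/sde); adjust the paths for your hosts:

```scala
import java.nio.file.{Files, Paths}

// Minimal sketch: report total and used space for each data directory's file store.
// The paths are the ones mentioned in this thread; adjust them as needed.
object DiskUsage {
  def main(args: Array[String]): Unit = {
    val gb = 1024.0 * 1024 * 1024
    Seq("/data/sdb", "/data/sdc", "/data/sdd", "/data/sde").foreach { dir =>
      val store = Files.getFileStore(Paths.get(dir))
      val total = store.getTotalSpace / gb
      val used  = (store.getTotalSpace - store.getUsableSpace) / gb
      println(f"$dir%-12s total=$total%.1f GB used=$used%.1f GB (${used / total * 100}%.0f%%)")
    }
  }
}
```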
09-05-2018
12:28 PM
@Jay, can you respond to Karthik Palanisamy's question?
09-05-2018
12:25 PM
Yes, dfs.datanode.data.dir is set to /data/sdb,/data/sdc,/data/sdd,/data/sde, and hdfs-site.xml also has the correct configuration.
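To double-check what Hadoop itself resolves for that key, here is a minimal sketch, assuming hadoop-common on the classpath and the usual HDP config location /etc/hadoop/conf/hdfs-site.xml (adjust the path for your environment):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Minimal sketch: print dfs.datanode.data.dir exactly as Hadoop resolves it
// from the loaded hdfs-site.xml.
object ShowDataDirs {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"))  // assumed location
    println("dfs.datanode.data.dir = " + conf.get("dfs.datanode.data.dir", "<not set>"))
  }
}
```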