Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2092 | 06-15-2020 05:23 AM |
|  | 17432 | 01-30-2020 08:04 PM |
|  | 2254 | 07-07-2019 09:06 PM |
|  | 8711 | 01-27-2018 10:17 PM |
|  | 4912 | 12-31-2017 10:12 PM |
09-05-2018
01:10 PM
We have the following Advanced spark2-log4j-properties, from Ambari --> Spark2 --> Configs:
# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.spark.metrics.MetricsConfig=DEBUG
log4j.logger.org.apache.spark.deploy.yarn.Client=DEBUG
How do we change the current log4j configuration to debug mode?
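For reference, a minimal sketch of what the same Advanced spark2-log4j-properties block could look like with debug logging enabled. This assumes the goal is verbose output from the root logger; only the rootCategory level is changed here, and after saving the change in Ambari the affected Spark2 components typically need to be restarted for it to take effect:
# Set everything to be logged to the console at DEBUG level
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n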
Labels:
- Apache Spark
09-05-2018
12:36 PM
This is an example of what we have on the datanode machine; you can see that usage is around 47-61%, and each disk is 20G in size.
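For reference, a sketch of how that per-disk usage can be checked from the datanode shell (the mount points are an assumption, based on the dfs.datanode.data.dir values mentioned elsewhere in this thread):
# Show size, used and available space for each DataNode data disk
df -h /data/sdb /data/sdc /data/sdd /data/sde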
09-05-2018
12:28 PM
@Jay, can you refer to Karthik Palanisamy's question?
09-05-2018
12:25 PM
Yes, dfs.datanode.data.dir is set to /data/sdb,/data/sdc,/data/sdd,/data/sde, and hdfs-site.xml also has all the right configuration.
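As a quick sanity check (a sketch, assuming the command is run on a node with the HDFS client configuration in place), the effective value can be printed with:
# Print the value of dfs.datanode.data.dir as seen by the HDFS configuration
hdfs getconf -confKey dfs.datanode.data.dir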
09-05-2018
12:11 PM
Yes, we know that. As you can see from the whole thread, the conclusion is to add disks to each datanode, so there is no other solution except this.
09-05-2018
10:46 AM
The remaining DFS capacity is only 18G.
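For context, a sketch of how the remaining capacity can be read per datanode (run as the hdfs user; the report lists configured capacity, DFS used and DFS remaining for each node):
# Print the cluster-wide and per-DataNode capacity report
su - hdfs -c "hdfs dfsadmin -report"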
09-05-2018
08:06 AM
@Jay, please let me know if I understand it as follows. Let's say that one of the replicas of spark2-hdp-yarn-archive.tar.gz is corrupted. When I run this CLI: su - hdfs -c "hdfs fsck /hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz" does it actually mean that fsck will replace the bad replica with the good one, and the status will finally be HEALTHY?
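For reference, a sketch of how the individual block replicas of that file can be inspected (the path is taken from the post above; -files, -blocks and -locations are standard hdfs fsck options that list each block and the datanodes holding its replicas):
# List blocks and replica locations for the archive file
su - hdfs -c "hdfs fsck /hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz -files -blocks -locations"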
09-05-2018
05:53 AM
@Jay, just to be sure: when you said "increase the DFS capacity", you actually mean to add disks / add capacity by giving dfs.datanode.data.dir more mount points, am I right?
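As a sketch of what that could look like in hdfs-site.xml (the extra mount point /data/sdf is purely hypothetical, and a new disk typically has to be formatted and mounted on each datanode before the DataNode services are restarted):
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- /data/sdf below is a hypothetical new mount point appended to the existing list -->
  <value>/data/sdb,/data/sdc,/data/sdd,/data/sde,/data/sdf</value>
</property>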
09-05-2018
05:33 AM
@Jay, in spite of that, this is a different case. I posted the following thread yesterday - https://community.hortonworks.com/questions/217423/spark-application-communicating-with-driver-in-hea.html - can you help me with this?
09-05-2018
05:29 AM
@Jay, very nice solution. Until now I was doing the following in order to verify the file:
gzip -t /var/tmp/spark2-hdp-yarn-archive.tar.gz
gunzip -c /var/tmp/spark2-hdp-yarn-archive.tar.gz | tar t > /dev/null
tar tzvf spark2-hdp-yarn-archive.tar.gz > /dev/null