Member since: 10-24-2018
Posts: 4
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1499 | 11-05-2018 01:39 PM
01-29-2019 02:49 PM
Edit the file spark-defaults.conf as follows:

spark.history.fs.cleaner.enabled true
spark.history.fs.cleaner.maxAge 12h [Job history files older than this will be deleted when the filesystem history cleaner runs.]
spark.history.fs.cleaner.interval 1h [How often the filesystem job history cleaner checks for files to delete.]

Then restart the Spark history server. Setting these values at application run time, i.e. via --conf on spark-submit, has no effect. Either set them at cluster creation time via the EMR configuration API, or manually edit spark-defaults.conf, set these values, and restart the Spark history server. Also note that the logs will only be cleaned up the next time your Spark application restarts. For example, if you have a long-running Spark streaming job, no logs for that application run will be deleted and they will keep accumulating; only when the job next restarts will the older logs be cleaned up. Alternatively, you can disable event logging entirely by setting spark.eventLog.enabled to false.
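Putting the settings above together, a minimal spark-defaults.conf fragment might look like the sketch below; the 12h and 1h values are the examples from this post, not recommendations, so tune them to your retention needs:

```properties
# Enable the history server's filesystem cleaner
spark.history.fs.cleaner.enabled    true
# Job history files older than this are deleted when the cleaner runs
spark.history.fs.cleaner.maxAge     12h
# How often the cleaner checks for files to delete
spark.history.fs.cleaner.interval   1h
```

As noted above, these take effect only after the Spark history server is restarted, not when passed via --conf at spark-submit time.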
01-29-2019 02:49 PM
Go to the ResourceManager UI in Ambari and click the Nodes link on the left side of the window. It shows all NodeManagers and the reason each one is listed as unhealthy. The most commonly found reason is that a disk space threshold has been reached. In that case, consider the following parameters:
Parameter | Default value | Description
---|---|---
yarn.nodemanager.disk-health-checker.min-healthy-disks | 0.25 | The minimum fraction of disks that must be healthy for the NodeManager to launch new containers. This applies to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs; if fewer healthy local-dirs (or log-dirs) are available, new containers will not be launched on this node.
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage | 90.0 | The maximum percentage of disk space utilization allowed before a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the NodeManager checks for a full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb | 0 | The minimum space (in MB) that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
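As a sketch, these thresholds live in yarn-site.xml; the values below are simply the defaults from the table above, shown only to illustrate the format:

```xml
<!-- yarn-site.xml: NodeManager disk health-checker thresholds (defaults shown) -->
<property>
  <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
  <value>0.25</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>90.0</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
  <value>0</value>
</property>
```

After changing these (for example through Ambari), the NodeManagers need to be restarted for the new thresholds to take effect.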
Finally, if the above steps do not reveal the actual problem, check the NodeManager logs at /var/log/hadoop-yarn/yarn.
11-05-2018 01:39 PM
We were able to resolve the issue by applying the configuration below to the Oozie action in the workflow XML:

<property>
  <name>mapreduce.job.user.classpath.first</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.task.classpath.user.precedence</name>
  <value>true</value>
</property>
<property>
  <name>oozie.launcher.mapreduce.task.classpath.user.precedence</name>
  <value>true</value>
</property>
<property>
  <name>oozie.launcher.mapreduce.job.user.classpath.first</name>
  <value>true</value>
</property>

To add, we got the reference for these properties from a related previous post: https://community.hortonworks.com/questions/114525/oozie-overrides-dependencies-with-shared-libsprobl.html
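For context, a minimal sketch of where such properties sit inside an Oozie map-reduce action is shown below. The action name, transitions, and the ${jobTracker}/${nameNode} parameters are hypothetical placeholders, not taken from the original workflow:

```xml
<!-- Hypothetical action name and transitions; only the <configuration>
     block carries the classpath-precedence properties from this post. -->
<action name="my-mr-action">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <property>
        <name>oozie.launcher.mapreduce.job.user.classpath.first</name>
        <value>true</value>
      </property>
      <!-- the remaining three properties from the post go here as well -->
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
```

The oozie.launcher.* variants apply to the launcher job that Oozie starts, while the plain mapreduce.* variants apply to the MapReduce job itself, which is why the post sets both.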
10-24-2018 03:45 PM
Hi, we have an application that requires the jackson-databind library at version 2.4.4 or later. The application is hosted on an HDP 2.6.5 CentOS AWS 3-node cluster (with Ambari and Oozie set up). During the job run we got the error below. On investigation we found that the HDP setup ships version 2.2.3:

../2.6.5.0-292/hadoop/lib/jackson-databind-2.2.3.jar
../2.6.5.0-292/hadoop/client/jackson-databind-2.2.3.jar
../2.6.5.0-292/hadoop/client/jackson-databind.jar
../2.6.5.0-292/hadoop-hdfs/lib/jackson-databind-2.2.3.jar
../2.6.5.0-292/hadoop-httpfs/webapps/webhdfs/WEB-INF/lib/jackson-databind-2.2.3.jar
../2.6.5.0-292/hadoop-yarn/lib/jackson-databind-2.2.3.jar

We tried passing the new jar as a lib in the job configuration, and the attached YARN logs show it is picked up, but the MR action execution still fails. Can you please suggest how to upgrade to 2.4.x, or any possible workaround for this issue? Thanks.