Member since: 09-07-2017
Posts: 40
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9148 | 12-08-2017 06:41 AM |
12-07-2022 12:47 PM
@hanumanth Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
09-02-2021 10:17 PM
@kokku Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-30-2021 02:55 PM

You are asking how to get the per-job memory and CPU counters. Please see the recent response in: https://community.cloudera.com/t5/Support-Questions/How-to-get-the-YARN-jobs-metadata-directly-not-using-API/m-p/322711/highlight/false#M228910 In the metadata (counter) output you will see the vcore-milliseconds and megabyte-milliseconds values for all map and reduce tasks, along with the Task Summary, Analysis, File System Counters, and other information about the specific job.
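As a quick illustration of where that output comes from (these are standard Hadoop CLI commands, though the arguments accepted vary by Hadoop version): for a finished job, `mapred job -status <job-id>` prints the job's counters, including the vcore-milliseconds (CPU) and megabyte-milliseconds (memory) totals, while `mapred job -history <job-id or job-history-file>` prints the fuller Task Summary and Analysis sections mentioned above.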
12-08-2017 06:41 AM

1 Kudo
Spark resolves configuration properties in this order of precedence, highest first:
- Properties set on SparkConf or SparkContext in code
- Arguments passed to spark-submit, spark-shell, or pyspark at run time
- Properties set in /etc/spark/conf/spark-defaults.conf, in a properties file specified with --properties-file, or in a Cloudera Manager safety valve
- Environment variables exported or set in scripts

As a rule of thumb: use spark-defaults.conf for properties that apply to all jobs; use SparkConf or --properties-file for properties that are constant for a single application or a small set of them; use command-line arguments for properties that change between runs.
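Here is a minimal PySpark sketch of the first rule (the application name and memory value are hypothetical, chosen only for illustration):

```python
from pyspark import SparkConf, SparkContext

# Properties set on SparkConf in code take the highest precedence:
# this value wins over --conf on spark-submit and over
# spark-defaults.conf for the same key.
conf = (
    SparkConf()
    .setAppName("precedence-demo")       # hypothetical application name
    .set("spark.executor.memory", "4g")  # hypothetical value, set in code
)
sc = SparkContext(conf=conf)

# Confirm which value actually took effect for this run:
print(sc.getConf().get("spark.executor.memory"))  # prints: 4g

sc.stop()
```

If you launch the same script with spark-submit --conf spark.executor.memory=2g, the 4g set in code still wins; drop the .set(...) line and the 2g from the command line applies; drop both and Spark falls back to spark-defaults.conf (or its built-in default).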