
Spark jobs which were not getting into the RUNNING state from ACCEPTED State

Explorer

Spark jobs are not getting into the RUNNING state from the ACCEPTED state. This happens when submitting jobs with --driver-memory greater than 3G.

 

Example:

spark-submit --master yarn --deploy-mode cluster --driver-memory 4G --class org.apache.spark.examples.SparkPi /usr/hdp/3.1.5.0-152/spark2/examples/jars/spark-examples_2.11-2.3.2.3.1.5.0-152.jar 10000

 

Error:

[Mon Mar 02 11:33:41 -0500 2020] Application is added to the scheduler and is not yet activated. User's AM resource limit exceeded. Details : AM Partition = <DEFAULT_PARTITION>; AM Resource Request = <memory:6144, vCores:1>; Queue Resource Limit for AM = <memory:71680, vCores:1>; User AM Resource Limit of the queue = <memory:10240, vCores:1>; Queue AM Resource Usage = <memory:6144, vCores:1>;
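
Reading the numbers in that message shows why the job stays pending. This is only an illustration; the exact AM container size depends on the driver memory overhead and the YARN minimum-allocation rounding on this cluster:

AM resource request       =  6144 MB  (4096 MB driver memory + overhead, rounded up by YARN)
Queue AM resource usage   =  6144 MB  (an AM already running in the queue)
User AM resource limit    = 10240 MB

6144 + 6144 = 12288 MB, which exceeds the 10240 MB user AM limit, so the new ApplicationMaster is not activated and the application remains in ACCEPTED.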

1 Reply

Cloudera Employee

Hi,

 

We understand that the jobs are not getting into the RUNNING state from the ACCEPTED state. Could you please share the complete YARN application logs and ResourceManager logs so we can check for any errors? Jobs will also get stuck if there are not enough resources available in the cluster; this can be checked from the ResourceManager WebUI.
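
If the WebUI is not at hand, the same details can be checked from the command line. A rough sketch (assuming the ResourceManager web address is rm-host:8088 and the job was submitted to the default queue; adjust both for your cluster):

# List the applications that are stuck in the ACCEPTED state
yarn application -list -appStates ACCEPTED

# Show capacity, used capacity and state for the queue the job was submitted to
yarn queue -status default

# Dump the scheduler view (per-queue AM limits and usage) from the ResourceManager REST API
curl -s http://rm-host:8088/ws/v1/cluster/scheduler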

 

Thanks

AKR