Support Questions
Find answers, ask questions, and share your expertise

Problem with Spark cluster

Contributor

I am trying to run a spark application that runs fine in local mode.

I am running like this:

/usr/hdp/2.6.2.0-205/spark2/bin/spark-submit --class MyMain \
--master yarn \
--deploy-mode cluster \
--executor-memory 2G \
--num-executors 10 \
framework-1.0.0-0-all.jar

But it takes forever to start, and in the Hadoop application UI I see this status:

YarnApplicationState:ACCEPTED: waiting for AM container to be allocated, launched and register with RM

In the console I see this line every second for over 10 minutes:

17/10/24 15:57:30 INFO Client: Application report for application_1508848914801_0003 (state: ACCEPTED)

Any ideas?

8 REPLIES

Re: Problem with Spark cluster

Cloudera Employee

Check whether all the required ports are set in the job configuration and whether those ports are open between the nodes (e.g. spark.driver.port, etc.).
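Something like this, as a rough sketch (the property names are standard Spark settings, but the port numbers here are only placeholders; pick ports that are actually open in your firewall):

/usr/hdp/2.6.2.0-205/spark2/bin/spark-submit --class MyMain \
--master yarn \
--deploy-mode cluster \
--conf spark.driver.port=40000 \
--conf spark.blockManager.port=40010 \
--conf spark.port.maxRetries=32 \
framework-1.0.0-0-all.jar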


Re: Problem with Spark cluster

@Yair Ogen,

This is probably because of a resource crunch. Try adding NodeManagers and see if the application goes to the RUNNING state.
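To see whether the NodeManagers actually have free memory and vcores, you can also check from the command line (the node id below is just a placeholder, take the real one from the list output):

# List all NodeManagers and their state
yarn node -list -all
# Show memory and vcore usage for a single node
yarn node -status my-node:45454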

Thanks,

Aditya


Re: Problem with Spark cluster

Contributor

@Aditya Sirna

That didn't help. I now have 3 node managers, but it still seems stuck in the ACCEPTED state. Note: under http://my-node:8042/node/allApplications I do see the container running, and the logs show the application IS running. So it is even stranger that the app stays in ACCEPTED state for so long...
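For reference, the same report and the container logs can be pulled from the command line as well (application id taken from the console output above; yarn logs needs log aggregation enabled, or the app to have finished):

# Application report, including diagnostics and tracking URL
yarn application -status application_1508848914801_0003
# AM/container logs, once available
yarn logs -applicationId application_1508848914801_0003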


Re: Problem with Spark cluster

@Yair Ogen,

Can you please check the ResourceManager UI to see the actual reason? Log in to the RM UI -> Applications -> Accepted -> click on the app and check the Diagnostics field. Please paste the output here.
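If the UI is hard to reach, an equivalent check from the command line (standard YARN CLI, nothing cluster specific assumed):

# List applications currently sitting in the ACCEPTED state
yarn application -list -appStates ACCEPTED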


Re: Problem with Spark cluster

Contributor
Diagnostics: AM container is launched, waiting for AM container to Register with RM

Re: Problem with Spark cluster

Contributor

@Aditya Sirna

I suspect my issue is different. I have 3 healthy nodes (see the attached yarn-cluster.jpg).


Re: Problem with Spark cluster

Explorer

@Yair Ogen: What is your yarn.scheduler.capacity.maximum-am-resource-percent configuration?
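The default for that property is 0.1, i.e. only 10% of a queue's resources can be used by ApplicationMasters; if that share is already taken, new AMs sit in ACCEPTED. A quick way to check it (the path below is the usual HDP config location, adjust if your cluster keeps configs elsewhere):

# Print the configured AM resource percent from the capacity scheduler config
grep -A 1 "maximum-am-resource-percent" /etc/hadoop/conf/capacity-scheduler.xml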
