
Spark application stuck at ACCEPTED state at spark-submit


Hello,

I have configured a Hadoop cluster across 2 nodes and launched it together with YARN as follows:

On the master node:

  • hdfs namenode -regular
  • yarn resourcemanager

On the slave node:

  • hdfs datanode -regular
  • yarn nodemanager

The web UI also shows that a connection has been established between the two machines that form the cluster.
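
The same check can be done from the command line. A minimal sketch, assuming the daemons above are running and the Hadoop binaries are on the PATH:

    # List the NodeManagers registered with the ResourceManager;
    # the slave node should appear with state RUNNING.
    yarn node -list

    # List the DataNodes registered with the NameNode.
    hdfs dfsadmin -report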

Note that start-dfs.sh on the master node started both the namenode and a datanode there, even after setting the slaves and hosts files.
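
For context, start-dfs.sh decides where to launch datanodes from the slaves file (etc/hadoop/slaves in Hadoop 2.x). A minimal sketch, with slave-node as a placeholder hostname:

    # etc/hadoop/slaves — one worker hostname per line.
    # If the master's own hostname (or localhost) also appears here,
    # start-dfs.sh will launch a datanode on the master as well.
    slave-node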

Now I submit an application (a simple hello world) to YARN with this command:

spark-submit --class "main" --master yarn pathToJar
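
For completeness, the general form with explicit, deliberately small resource requests looks like this; a sketch, with "main" and pathToJar kept as placeholders from above:

    # Request small driver/executor resources so a nearly full queue
    # can still satisfy the allocation.
    spark-submit \
      --class "main" \
      --master yarn \
      --driver-memory 512m \
      --executor-memory 512m \
      --num-executors 1 \
      pathToJar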

 

But I get the following output, and the application never leaves the ACCEPTED state:

15/08/29 12:07:58 INFO Client: ApplicationManager is waiting for the ResourceManager
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.hdfs
     start time: 1440864477580
     final status: UNDEFINED
     user: hdfs
15/08/29 12:07:59 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:00 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:01 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
...

What am I missing in my configuration?

3 REPLIES

Super Collaborator

Verify the submission queue for application_1440861466017_0007 and ensure it has sufficient resources to launch the application.
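
A sketch of how to check that from the command line (root.hdfs is the queue name from the log above; yarn queue -status needs a reasonably recent Hadoop release):

    # Show the configured capacity and current usage of the queue.
    yarn queue -status root.hdfs

    # List applications currently holding or waiting for resources.
    yarn application -list -appStates ACCEPTED,RUNNING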

Master Collaborator

Can you share a screenshot of the YARN Web UI and the resources you are passing with the spark-submit command?

Master Collaborator

I think you don't have sufficient resources in the root.hdfs queue to run the job. Check the ResourceManager UI for any pending or running jobs/applications in the root.hdfs queue, and kill them if they are not required. Also verify that, on the Spark side, you are requesting small enough resources to test it.
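
A sketch of those steps with the yarn CLI (the application ID is a placeholder for whatever the list command returns):

    # Find applications queued or running in the cluster.
    yarn application -list -appStates ACCEPTED,RUNNING

    # Kill anything that is blocking the queue and is not required.
    yarn application -kill <applicationId>

For the Spark side, resubmitting with small, explicit requests (for example --driver-memory 512m --executor-memory 512m --num-executors 1, as in the sketch in the question) helps confirm whether queue capacity is the problem.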