I have configured a Hadoop cluster over 2 nodes and launched it along with YARN as follows:
On the master node :
On the slave node :
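The exact launch commands are not shown above; a minimal sketch of a typical sequence, assuming the standard Hadoop sbin scripts, a configured HADOOP_HOME, and passwordless SSH from master to slave (all hypothetical for this setup):

```shell
# On the master node: start HDFS and YARN daemons.
# start-dfs.sh launches the NameNode here and DataNodes on the hosts
# listed in the slaves file; start-yarn.sh does the same for the
# ResourceManager and NodeManagers.
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh

# On the slave node: nothing needs to be launched manually.
# Verify the expected daemons are running:
jps   # should list DataNode and NodeManager
```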
The web UI shows that a connection has been established between the two machines that form the cluster.
Note that start-dfs.sh on the master node started both the namenode and a datanode, even after configuring the slaves and hosts files.
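A datanode starting on the master usually means the master's hostname is still listed in the slaves file (or `localhost` is). A sketch of what that file might look like, assuming Hadoop's default layout and a hypothetical slave hostname:

```
# $HADOOP_HOME/etc/hadoop/slaves
# List only the hosts that should run DataNode/NodeManager daemons.
# Remove the master's hostname (and localhost) if the master should
# not also act as a worker.
slave-node
```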
Now I submit an application (a simple hello world) to YARN with this command:
spark-submit --class "main" --master yarn pathToJar
But I get this output:
15/08/29 12:07:58 INFO Client: ApplicationManager is waiting for the ResourceManager
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -1
    start time: 1440864477580
    final status: UNDEFINED
    user: hdfs
15/08/29 12:07:59 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:00 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:01 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
...
What am I missing in my configuration?
I think you don't have sufficient resources to run the job in the root.hdfs queue. Check the ResourceManager UI for pending or running applications in the root.hdfs queue; if any are running and not needed, kill them. Also, on the Spark side, request fewer resources when testing.
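A sketch of the commands implementing the advice above (the application ID and the resource sizes are hypothetical examples, not values from this cluster):

```shell
# Inspect what is currently occupying the queue:
yarn application -list -appStates RUNNING,ACCEPTED

# Kill an unneeded application (ID is hypothetical):
yarn application -kill application_1440861466017_0003

# Re-submit with explicitly small resource requests, so YARN can
# allocate the ApplicationMaster even on a small 2-node cluster:
spark-submit --class "main" --master yarn \
  --driver-memory 512m \
  --executor-memory 512m \
  --num-executors 1 \
  pathToJar
```

If the application still sits in ACCEPTED with these small requests, the problem is more likely NodeManagers not registering with the ResourceManager than queue capacity.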