Created 10-22-2021 12:33 PM
Hello,
I have configured and set up a Hadoop cluster over 2 nodes and launched it along with YARN like so:
On the master node:
On the slave node:
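Roughly, assuming the standard Hadoop sbin scripts (the exact commands used may have differed), the launch looks like this:

# On the master node (these also bring up the slave's daemons over SSH):
$HADOOP_HOME/sbin/start-dfs.sh      # NameNode on the master, DataNodes on every host listed in the slaves/workers file
$HADOOP_HOME/sbin/start-yarn.sh     # ResourceManager on the master, NodeManagers on the workers

# On the slave node, individual daemons can also be started by hand if needed (Hadoop 2.x):
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager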
The web UI shows that a connection has been established between the two machines that form the cluster.
Note that start-dfs on the master node started both the NameNode and the DataNode, even after setting the slaves and hosts files.
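For context, start-dfs.sh launches a DataNode on every host listed in the slaves file (workers on Hadoop 3.x), so a DataNode will appear on the master if the master's own hostname is listed there. The hostnames below are placeholders:

# $HADOOP_HOME/etc/hadoop/slaves  (placeholder hostnames)
master-node     # remove this line if the master should not run a DataNode/NodeManager
slave-node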
Now I submit an application (a simple hello world) to YARN through this command:
spark-submit --class "main" --master yarn pathToJar
But I get the following output:
15/08/29 12:07:58 INFO Client: ApplicationManager is waiting for the ResourceManager
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.hdfs
     start time: 1440864477580
     final status: UNDEFINED
     user: hdfs
15/08/29 12:07:59 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:00 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:01 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
...
What am I missing in my configuration?
Created 10-13-2023 04:35 AM
Verify the submission queue for application_1440861466017_0007 and ensure it has sufficient resources to launch the application.
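For example, with the standard YARN CLI (the queue name and application ID are the ones from the post above):

yarn queue -status root.hdfs                          # shows the queue's capacity, used capacity, and application limits
yarn application -list -appStates RUNNING,ACCEPTED    # shows what is currently occupying or waiting in the queues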
Created 10-19-2023 04:34 PM
Can you share a screenshot of the YARN Web UI and the resources you are passing with the spark-submit command?
Created 10-24-2023 09:36 PM
I think you don't have sufficient resources in the root.hdfs queue to run the job. Check the ResourceManager UI for any pending or running jobs/applications in the root.hdfs queue, and kill any that are no longer needed. Also, on the Spark side, try submitting with smaller resource requests to test it.
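As a sketch (the application ID and jar path are taken from the original post; the memory and core values are only illustrative):

# Kill an application that is no longer needed
yarn application -kill application_1440861466017_0007

# Re-submit with deliberately small resource requests to rule out a queue capacity problem
spark-submit --class "main" --master yarn \
  --driver-memory 512m --executor-memory 512m \
  --executor-cores 1 --num-executors 1 \
  pathToJar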