WARN cluster.YarnScheduler: Initial job has not accepted any resources
I'm using Cloudera Manager 7.4.4 and running a Spark (2.4.4) application. I'm getting the warning below, and the application goes into an infinite loop.
WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I'm using a simple spark-submit command: spark-submit <filename.py>
My Python version is 2.7.
When I run the command below, I get many other errors:
spark-submit --master yarn --deploy-mode cluster --driver-memory 5g --executor-memory 5g --num-executors 3 --executor-cores 2 <filename.py>
Below are the errors (screenshots attached).
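For anyone hitting the same warning, a quick way to confirm that NodeManagers are actually registered with the ResourceManager and that something else isn't already holding the queue's resources (a minimal sketch; it assumes the standard yarn CLI is available on a gateway host and that YARN is the cluster manager):

```bash
# List all NodeManagers registered with the ResourceManager and their state.
# (Per-node memory/vcores are visible in the ResourceManager web UI.)
yarn node -list -all

# List applications currently accepted or running, to see whether
# other jobs are already occupying the queue's resources.
yarn application -list -appStates RUNNING,ACCEPTED
```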
Created 12-29-2022 04:03 AM
You should check the container logs for the container mentioned in the snapshots above for more details. This is generally caused by a resource crunch. You can also consider increasing the YARN configs below; a short sketch of checking them follows the list.
yarn.nodemanager.resource.memory-mb
yarn.nodemanager.resource.cpu-vcores
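A rough sketch of both suggestions, pulling the container logs and checking what the NodeManagers currently offer. The application ID is a placeholder, the config path is the usual CDP default and may differ on your cluster, and on a Cloudera Manager cluster these properties are normally changed through the YARN service's configuration page rather than by editing yarn-site.xml by hand:

```bash
# Pull the aggregated logs for the failing application; replace
# <application_id> with the real ID from `yarn application -list`
# or the ResourceManager UI.
yarn logs -applicationId <application_id>

# Check the NodeManager resource settings currently in the client
# config (path assumes the usual /etc/hadoop/conf layout).
grep -A1 'yarn.nodemanager.resource' /etc/hadoop/conf/yarn-site.xml
```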
Hope this helps,
Paras
Created 01-02-2023 06:02 AM
Hi @SantoshB
You can see such messages when you have hit the user-limit factor or the resource limit at the queue level. You can tune the user-limit factor, or check queue utilization and schedule applications accordingly.
It also looks like the application has failed with exit code 13. Could you please share the YARN trace so the reason for the failure can be identified?
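A hedged sketch of how to check the queue side of this; the queue name and application ID are placeholders, and on CDP the Capacity Scheduler settings are usually managed through Queue Manager rather than edited directly:

```bash
# Show the queue's configured, current, and maximum capacity,
# to see whether it is already saturated.
yarn queue -status <queue_name>

# Show the application's final status and diagnostics, which
# usually include the reason behind exit code 13.
yarn application -status <application_id>

# The per-user share of a queue is controlled by the Capacity
# Scheduler property:
#   yarn.scheduler.capacity.<queue-path>.user-limit-factor
```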
