Actually, no. I didn't make any config changes. The YARN pool allocation is the same as in the fair-scheduler.xml shown above.
"The spark job submitted to root.qatest is running actually (State is RUNNING according to your screenshot)." => It shows RUNNING, but it is actually waiting for a task container indefinitely. The AM container gets assigned to the job, but the task containers never do. If I look at the pool usage, the containers stay in the pending state.
"It may also be helpful to look at the spark job log to see if there is any useful information there." => The same job runs fine on the default queue.
I see no pattern even after long monitoring, but it never happens with the "default" pool, and when the number of pools is higher (I tried with 3-4) it happens more frequently.
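For reference, the multi-queue layout I'm testing with looks roughly like this. This is just a minimal sketch, not my exact file; apart from root.qatest, the queue names and weights here are placeholders:

```xml
<?xml version="1.0"?>
<!-- Minimal fair-scheduler.xml sketch: 3-4 sibling pools under root.
     Queue names other than "qatest" and all weights are placeholders. -->
<allocations>
  <queue name="qatest">
    <weight>1.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
  <queue name="pool2">
    <weight>1.0</weight>
  </queue>
  <queue name="pool3">
    <weight>1.0</weight>
  </queue>
  <!-- the "default" queue is created implicitly for apps
       not mapped to any named queue -->
</allocations>
```

Jobs submitted to default with this layout run normally; only the named pools show the pending-container behavior.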
I can't see anything wrong in the logs either. I'm kind of running out of ideas.