04-15-2018 11:17 PM
@hedy thanks for sharing.
The workaround you received makes sense only when you are not using a cluster manager.
Local mode ( --master local[n], where n is the number of worker threads ) is generally used when you want to test or debug something quickly: only one JVM is launched on the node from which you run pyspark, and that JVM acts as driver, executor, and master all in one. Of course, with local mode you lose the scalability and resource management that a cluster manager provides. If you want to debug why simultaneous Spark shells are not working under Spark-on-YARN, we need to diagnose it from the YARN perspective (troubleshooting steps shared in the last post). Let us know.
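For illustration, the two launch modes described above look like this on the command line (a sketch assuming a standard Spark-on-YARN install; the exact thread counts and states are examples, not from the posts above):

```shell
# Local mode: one JVM on this node acts as driver, executor, and master.
# local[2] uses 2 worker threads; local[*] uses all cores on the machine.
pyspark --master local[2]

# Spark-on-YARN (client deploy mode): the driver runs where you launched
# pyspark, but executors are containers allocated by YARN, so the cluster
# scheduler decides whether a second shell gets resources at all.
pyspark --master yarn --deploy-mode client

# When a second shell hangs, check what YARN itself reports; an application
# sitting in ACCEPTED usually means its queue has no free resources for the
# ApplicationMaster yet.
yarn application -list -appStates ACCEPTED,RUNNING
```

This is why local mode sidesteps the problem entirely: no scheduler is involved, so nothing can queue behind another application.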
03-08-2019 05:10 AM
I am facing the same issue; can anyone please suggest how to resolve it? When I run two Spark applications, one remains in the ACCEPTED state while the other is RUNNING.
What configuration needs to be done for both to run simultaneously?
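(For context, one common cause of this symptom is a resource-pool limit such as maxRunningApps, or a memory/vcore cap that only leaves room for one application's ApplicationMaster. Purely as an illustrative sketch, with a placeholder pool name and example values rather than any actual setup, a YARN Fair Scheduler allocation that permits two concurrent applications could look like this:)

```xml
<!-- fair-scheduler.xml: example allocation file, values are illustrative -->
<allocations>
  <queue name="default">
    <!-- Allow at least two applications to run at the same time -->
    <maxRunningApps>2</maxRunningApps>
    <!-- The pool must be large enough for both drivers/AMs and executors -->
    <maxResources>8192 mb,4 vcores</maxResources>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
</allocations>
```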
Following is my dynamic resource pool configuration: