Currently I have Spark on YARN configured and working (on 9 servers, approx. 200 cores).
I would like to configure them in standalone mode (to avoid the time wasted allocating YARN containers).
Is it possible?
Created 08-20-2019 03:49 AM
anyone?
Created on 08-27-2019 04:10 AM - edited 08-27-2019 06:32 AM
Once more, can someone help?
Created 09-11-2019 03:01 AM
bump
Created 09-11-2019 01:45 PM
You can set the master and deploy mode in your configuration files or from the command line when submitting a job. Use one of the following options:
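For example, to make the standalone master the default so you don't have to pass flags on every submit, you could set it once in conf/spark-defaults.conf. A minimal sketch, where <master-host> is a placeholder for whichever node runs the Spark Master:

# conf/spark-defaults.conf
# point submissions at the standalone master instead of YARN
spark.master             spark://<master-host>:7077
# run the driver inside the cluster rather than on the client machine
spark.submit.deployMode  cluster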
Standalone Cluster Mode
As the name suggests, it's a standalone cluster with only Spark-specific components. It doesn't depend on any Hadoop components; a dedicated Spark Master process acts as the cluster manager.
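Before you can submit anything, the standalone daemons have to be running. A rough sketch of bringing the cluster up by hand, assuming a Spark 2.x install at $SPARK_HOME on every node (<master-host> is again a placeholder):

# on the master node: start the standalone Master (web UI on port 8080 by default)
$ $SPARK_HOME/sbin/start-master.sh

# on each of the 9 worker nodes: start a Worker and register it with the Master
$ $SPARK_HOME/sbin/start-slave.sh spark://<master-host>:7077

Alternatively, list the worker hostnames in conf/slaves and run sbin/start-all.sh on the master to start everything over SSH.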
To launch a Spark application in standalone cluster mode:
$ ./bin/spark-submit --class path.to.your.Class --master spark://<master-host>:7077 --deploy-mode cluster [options] <app jar> [app options]
Client Mode
To launch a Spark application in client mode, do the same but replace cluster with client; the driver then runs on the machine where you launch the job. The following shows how you can run spark-shell in client mode:
$ ./bin/spark-shell --master spark://<master-host>:7077 --deploy-mode client
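Since you mentioned roughly 200 cores across 9 servers, you may also want to cap what each worker offers to the Master. A sketch of conf/spark-env.sh on each worker node; the numbers are assumptions, adjust them to your hardware:

# conf/spark-env.sh (on each worker node)
# cores this worker advertises to the Master (assumed ~22 per host here)
SPARK_WORKER_CORES=22
# memory this worker advertises (placeholder value)
SPARK_WORKER_MEMORY=64g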
Hope that helps