I have configured an Apache Spark standalone cluster on two Ubuntu 14.04 VMs. One VM is the Master and the other is a Worker; both are connected with passwordless SSH as described here.
After that, from the Master, I started both the master and the worker with the following command from the Spark home directory -
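(The exact command was not included in the post; since both daemons were started from the Master, it was presumably the standard standalone launcher script, something like:)

```shell
# Assumption: started from $SPARK_HOME on the Master.
# start-all.sh launches the Master locally and a Worker on every
# host listed in conf/slaves, over the passwordless SSH set up earlier.
./sbin/start-all.sh

# Equivalent manual form (Master URL is a placeholder):
#   ./sbin/start-master.sh
#   ./sbin/start-slave.sh spark://<master-ip>:7077   # run on the Worker VM
```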
Then I ran the following command from both the Master and the Worker VMs.
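(Again the command itself is missing from the post; a typical way to verify that the daemons came up on each VM is `jps`:)

```shell
# Assumption: checking that the Spark JVM daemons are running.
# On the Master VM you would expect a "Master" process,
# on the Worker VM a "Worker" process.
jps
```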
It seemed that the Master and Worker were running properly, and no errors appeared in the Web UI. But when I try to run an application using the following command -
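(The submit command was also omitted; it was presumably a `spark-submit` pointed at the standalone master, along these lines - the class name, jar path, and memory setting below are placeholders:)

```shell
# Assumption: submitting against the standalone Master.
# --class, the jar path, and --executor-memory are hypothetical values.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://<master-ip>:7077 \
  --executor-memory 2G \
  /path/to/my-app.jar
```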
It gives the following WARN message in the console:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
When I run the application with the worker created on the same server as the master, it executes successfully. The WARN message appears only when the Master and Worker are on separate VMs. Both VMs are memory-optimized EC2 instances (r3.xlarge).