
Spark standalone master HA jobs in WAITING status


Explorer

We are trying to set up HA for the Spark standalone master using ZooKeeper. We have two ZooKeeper hosts, which we are also using for Spark HA.

We configured the following in spark-env.sh:

SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk_server1:2181,zk_server2:2181"
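
For context, here is a slightly fuller sketch of the same setting; the spark.deploy.zookeeper.dir property is optional (Spark defaults it to /spark), and the host names are placeholders for our environment:

# spark-env.sh on both master hosts
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk_server1:2181,zk_server2:2181 -Dspark.deploy.zookeeper.dir=/spark"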

We started both masters.

We started a spark-shell session and the status of the job was RUNNING; master1 was ALIVE and master2 was in STANDBY status. We then killed master1, master2 took over, and all the workers appeared alive under master2.
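
For reference, in standalone HA the shell can be pointed at both masters in its master URL, along these lines (host names and the default port 7077 are placeholders):

# Launch spark-shell with both masters listed so the driver can follow a failover
./bin/spark-shell --master spark://master1:7077,master2:7077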

The shell that was already running moved over to the new master. However, the application status is now WAITING and the executors are in LOADING status.

There are no errors in the worker or executor logs, apart from the notification that they connected to the new master.

I can see that the worker re-registered, but the executor does not seem to have started. Is there anything I am missing?

My Spark version is 1.5.0.

1 REPLY

Re: Spark standalone master HA jobs in WAITING status

Expert Contributor

Hi Srini,

We recommend using Spark on YARN, as Spark Standalone isn't supported by Cloudera and isn't as well tested. Cloudera has documentation on making the ResourceManager highly available, which would allow Spark to be HA as well.
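
If you do go the YARN route, switching is mostly a change of master URL; a minimal sketch on Spark 1.5 (the class and jar names below are placeholders):

# Interactive shell on YARN
./bin/spark-shell --master yarn-client

# Batch job on YARN in cluster mode
./bin/spark-submit --master yarn-cluster --class com.example.MyApp /path/to/myapp.jar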

For your specific question, unfortunately I'm unable to help, but you may want to try a different forum, as most users' experience in this forum will be with Spark on YARN.