Support Questions
Find answers, ask questions, and share your expertise

Spark: Job success but error when connecting to masters UI

I have a fresh installation of CDH 5.5; I followed the steps and everything is working. I have recompiled my Spark (Scala) jobs, and when submitting with spark-submit, in either client or cluster mode, I get a bunch of errors at the beginning and at the end of the job (I have masked the IPs):
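For reference, the submission looks roughly like this (the class name, jar, and master URL are placeholders, not from the original post; the log below suggests a standalone master, hence the spark:// URL):

```shell
# Hypothetical spark-submit invocation; adjust class, jar, and master URL
spark-submit \
  --class com.example.MyJob \
  --master spark://master.node.ip:7077 \
  --deploy-mode cluster \
  my-job-assembly.jar
```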


15/12/04 15:19:32 ERROR ErrorMonitor: AssociationError [akka.tcp://sparkDriver@{{master.node.ip}}:53543] <- [akka.tcp://sparkExecutor@{{workerX.node.ip}}:56137]: Error [Shut down address: akka.tcp://sparkExecutor@{{workerX.node.ip}}:56137] [
akka.remote.ShutDownAssociation: Shut down address: akka.tcp://sparkExecutor@{{workerX.node.ip}}:56137
Caused by: akka.remote.transport.Transport$InvalidAssociationException: The remote system terminated the association because it is shutting down.


However, the job completes successfully and the runtime is more or less reasonable. This did not happen in my previous installations.


In addition, I have noticed that the Spark master node's UI is unreachable... something that did not happen in my previous CDH deployment.
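One quick way to narrow down whether this is a network problem or the UI simply not running: check the port from another host and from the master itself (18080 is a common master web UI port in CDH deployments; upstream Spark standalone defaults to 8080, so check whichever your install uses):

```shell
# From another host: does the master UI answer at all?
curl -sI http://master.node.ip:18080 | head -n 1

# On the master host itself: is anything listening on the UI/RPC ports?
ss -tlnp | grep -E '18080|8080|7077'
```

If the port is listening locally but unreachable remotely, something is still filtering traffic; if nothing is listening, the master web UI itself is not up.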


Is anyone having the same issues?


I'm running CDH 5.5 on CentOS 7.1 with firewalls disabled (iptables etc.).
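Worth double-checking on CentOS 7, since it replaced the old iptables service with firewalld, and disabling one does not disable the other:

```shell
# Verify both firewall services are actually off
systemctl status firewalld
systemctl status iptables

# SELinux can also block connections; check its current mode
getenforce
```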




Re: Spark: Job success but error when connecting to master

Master Collaborator
Yes, I've seen those errors sometimes in Spark 1.5; they seem to be harmless, just annoying.
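If the noise bothers you, one common workaround is to raise the log threshold for the remoting classes in Spark's log4j.properties. This is a sketch; the exact logger names are an assumption based on the log output and may differ across Spark builds:

```
# log4j.properties: quiet the shutdown-time association errors
# (logger names assumed from the "ERROR ErrorMonitor" lines above)
log4j.logger.akka.remote.EndpointWriter=FATAL
log4j.logger.akka.remote=ERROR
```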

You'd have to give more detail about a master not being reachable.
I've not seen any problems with the UI, no.