The Spark job submits successfully and I can track the application driver and worker/executor nodes.
Everything works fine, but my one concern is this: if the Kafka brokers are offline or restarted, my YARN-managed application should not shut down, yet it does.
If this is expected behavior, how do I handle such a situation with the least maintenance? Keep in mind that the Kafka cluster is not part of the Hadoop cluster and is managed by a different team, which is why our application needs to be resilient on its own.
If the brokers are entirely unavailable, I believe it would fail the job, yes. The job can't read any data, so it can't proceed. I think you'd just have to restart the job if the Kafka cluster went completely offline (or try to negotiate more reliability from the other team, if that's really the problem).
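That said, you can make the restarts automatic rather than manual. One common approach is to let YARN re-attempt the application when the driver exits, and widen the failure-counting window so that occasional broker outages don't exhaust the attempt budget. A sketch of the relevant `spark-submit` settings is below; the application jar, main class, and attempt counts are placeholders you would tune for your environment:

```shell
# Sketch: let YARN restart the driver after a failure instead of
# treating a Kafka outage as a permanent job failure.
# Class name and jar path are placeholders for your application.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  # Allow up to 4 application (driver/AM) attempts before YARN gives up.
  --conf spark.yarn.maxAppAttempts=4 \
  # Only count attempt failures within a sliding 1-hour window,
  # so a long-running job isn't killed by failures spread over weeks.
  --conf spark.yarn.am.attemptFailuresValidityInterval=1h \
  # Tolerate more task-level failures before failing a stage.
  --conf spark.task.maxFailures=8 \
  --class com.example.MyStreamingApp \
  my-streaming-app.jar
```

If you enable checkpointing (or, for Structured Streaming, a `checkpointLocation`), a restarted attempt can resume from its last committed Kafka offsets rather than starting over. This still won't keep a single attempt alive through an arbitrarily long broker outage, but it turns "someone has to notice and resubmit" into an automatic recovery for transient outages.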