
oozie spark2 action failing over rm

I am trying to run an Oozie Spark action, but it is failing. I checked the logs, and they show the error below:

2018-03-16 07:38:10,319 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
2018-03-16 07:38:10,771 WARN [main] org.apache.hadoop.ipc.Client: Failed to connect to server: rm1/oozie_server:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206

2018-03-16 07:38:10,780 INFO [main] org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2

I have some queries:

1. I have set the value of jobTracker to rm1:8050, and with this I was able to submit jobs earlier. If rm2 is now active and rm1 is in standby, do I need to change the jobTracker value to rm2?

How do I check which port rm2 is listening on?

Please help with this.

I also ran a Spark 1 job with the jobTracker address set to rm1; that job ran successfully, but the Spark 2 action is showing the problem above.
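For reference, my job.properties looks roughly like the sketch below. The hostnames, paths, and the spark2 sharelib property are placeholders for my setup, not exact values:

# job.properties (sketch; nameNode, workflow path and queue are placeholders)
nameNode=hdfs://mycluster
jobTracker=rm1:8050
queueName=default
oozie.use.system.libpath=true
# assumption: pointing the Spark action at the spark2 sharelib
oozie.action.sharelib.for.spark=spark2
oozie.wf.application.path=${nameNode}/user/${user.name}/spark2-workflow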

2 REPLIES

Super Collaborator

Hi @Anurag Mishra, you can use the value of 'yarn.resourcemanager.cluster-id' as the jobTracker.

# grep -A1 'yarn.resourcemanager.cluster-id' /etc/hadoop/conf/*
jobTracker=yarn-cluster

However, "Failing over to rm2" is just a "INFO" message , that indicates rm1 is Standby.

The issue with your Oozie spark2 action is likely something different.
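To check which ResourceManager is active and which port it listens on, something like the commands below should work. Here rm1 and rm2 are the RM IDs from yarn.resourcemanager.ha.rm-ids, and /etc/hadoop/conf is the usual config path; adjust both to your cluster:

# Check the HA state (active/standby) of each ResourceManager
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

# Find the RPC address and port each RM uses for job submission
grep -A1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml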

Super Collaborator

Hi @Anurag Mishra, please accept the answer if it resolved your issue.