For some reason, the Sqoop job's container either gets killed with exit code 137, or the job is sometimes re-attempted. This does not seem to happen if I stop the NodeManager role on the master server.
Can someone please explain what happens if I keep this role stopped? My master server is an AWS m4.4xlarge, so I believe my cluster now has 16 fewer cores available to it. I have looked through the NodeManager logs and did not find any errors or problems.
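As background on the error itself: exit code 137 is 128 + 9, i.e. the container process received SIGKILL. In a YARN cluster this usually means either YARN killed the container for exceeding its memory limit, or the kernel OOM killer did. A quick way to decode such exit codes:

```shell
# Exit codes above 128 mean "killed by signal (code - 128)".
code=137
sig=$((code - 128))
echo "signal $sig = $(kill -l "$sig")"   # prints: signal 9 = KILL
```

If it is a YARN memory kill, the container logs typically say so explicitly (e.g. "Container killed on request" with the memory usage), which is worth checking alongside the NodeManager logs.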
My Sqoop job usually restarted with this message:

This was a problem because the warehouse directory used in the command was not deleted before the re-attempt, so the retry failed with an "output dir already exists" error.
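One way to avoid the "output dir already exists" failure on a retry is to remove the stale warehouse directory before re-running the import. A minimal sketch, where the warehouse path, JDBC URL, and table name are hypothetical placeholders (`--warehouse-dir`, `--connect`, and `--table` are standard Sqoop flags):

```shell
#!/bin/sh
# Hypothetical warehouse path for illustration only.
WAREHOUSE_DIR=/user/etl/warehouse/orders

if command -v hadoop >/dev/null 2>&1; then
  # Remove leftover output from a previous failed attempt (no error if absent).
  hadoop fs -rm -r -f "$WAREHOUSE_DIR"
  sqoop import \
    --connect jdbc:mysql://db-host/sales \
    --table orders \
    --warehouse-dir "$WAREHOUSE_DIR"
else
  echo "cleanup step would remove $WAREHOUSE_DIR"
fi
```

Note that this makes the re-run idempotent; Sqoop also has a `--delete-target-dir` option for imports that use `--target-dir`, but there is no equivalent flag for `--warehouse-dir`, hence the explicit cleanup step.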