
What happens when I stop the NodeManager role on the master server?


Expert Contributor

For some reason, my Sqoop job container either gets killed with error 137, or at times the job attempts to execute again. This does not seem to happen when I stop the NodeManager role on the master server.

Can someone please explain what happens if I keep this role stopped? My master server is an AWS m4.4xlarge, so I believe my cluster will now have 16 fewer cores available to it. I have looked through the NodeManager logs and did not find any errors or problems.
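
Not something from the original post, but a quick way to verify the capacity impact is to compare the ResourceManager's cluster metrics before and after stopping the role. A minimal sketch, assuming the ResourceManager web UI listens on the default port 8088 (the hostname below is a hypothetical placeholder):

    # Sketch: query the YARN ResourceManager REST API and print the
    # cluster-wide capacity numbers. Run once before and once after
    # stopping the NodeManager role and compare the output.
    import json
    import urllib.request

    RM_METRICS_URL = "http://rm-host.example.com:8088/ws/v1/cluster/metrics"  # hypothetical host

    with urllib.request.urlopen(RM_METRICS_URL) as resp:
        metrics = json.load(resp)["clusterMetrics"]

    # With the role stopped, activeNodes should drop by one and
    # totalVirtualCores by that node's vCore allocation (16 on an m4.4xlarge).
    print("active NodeManagers:", metrics["activeNodes"])
    print("total vCores:", metrics["totalVirtualCores"])
    print("total memory (MB):", metrics["totalMB"])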

My Sqoop job usually restarted with this message:

    Found [1] Map-Reduce jobs from this launcher
    Killing existing jobs and starting over:

This was a problem, since the warehouse dir used in the command was not deleted before the re-attempt, and the job then stopped with the message that the output dir already exists.
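
Not part of the original question, but one common workaround sketch for the retry failure: remove the stale output directory before re-running the import, so a re-launched attempt does not die on the existing path. The HDFS path, table name, and JDBC URL below are hypothetical placeholders:

    # Sketch, assuming the Sqoop job is driven from a wrapper script:
    # delete the previous warehouse output directory, then re-run the import.
    import subprocess

    WAREHOUSE_DIR = "/user/etl/warehouse"  # hypothetical base dir
    TABLE = "my_table"                     # hypothetical table name

    # Ignore the return code: the directory may not exist on a first run.
    subprocess.run(["hadoop", "fs", "-rm", "-r", "-skipTrash",
                    f"{WAREHOUSE_DIR}/{TABLE}"])

    subprocess.run([
        "sqoop", "import",
        "--connect", "jdbc:mysql://db-host.example.com/exampledb",  # hypothetical
        "--table", TABLE,
        "--warehouse-dir", WAREHOUSE_DIR,
    ], check=True)

Sqoop also ships a --delete-target-dir import option that does this automatically, though as far as I know it applies to --target-dir imports rather than --warehouse-dir ones.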