Yes, a MapReduce job can be submitted from a slave node. Jobs can be run from any machine in the cluster as long as that node has the proper JobTracker location property configured — that is, Hadoop on that node must be configured with the correct JobTracker and NameNode addresses.
`mapred.job.tracker` should be configured on the slave node to point to the master's host and port, and connectivity between the slave and the master should be verified, for example by running `telnet master.com 8021`.
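For illustration, in a classic (MRv1) setup the slave's `mapred-site.xml` would contain something like the following — the hostname `master.com` and port `8021` are just the placeholder values from above, not real addresses:

```xml
<!-- mapred-site.xml on the slave node (hypothetical host/port) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master.com:8021</value>
  </property>
</configuration>
```

A quick `telnet master.com 8021` from the slave should then connect if the JobTracker is reachable and no firewall is blocking the port.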
@Harshali Patel Just to add to @Dukool SHarma's correct answer: if the slave is managed by Ambari, then all client configurations should be kept in sync automatically. But if the slave is not managed by Ambari, you should make sure all the client configuration for HDFS/YARN/MapReduce/etc. is kept in sync with the cluster manually.
Yes, as long as the appropriate clients are installed on the slave node. If you also have /etc/config populated with the correct details for connecting to your instance, then no connection parameters need to be specified for the clients (this is populated automatically if the slave node is deployed/configured by Ambari). In that case you submit the job exactly as you would on any other node.
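To make that concrete: with the client config in place, job submission from the slave looks the same as from any other node. This is just a sketch — the jar name and HDFS paths below are hypothetical, and the command assumes a working cluster:

```shell
# Submit the stock WordCount example from the slave node.
# The client reads the JobTracker/NameNode addresses from its
# local configuration files, so no host/port flags are needed.
hadoop jar hadoop-examples.jar wordcount /user/me/input /user/me/output
```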