
Resource management (i.e. HBase connections) in Spark executors


If I run a spark-submit with X executors and perform an action like `map` on an RDD, the action is executed in parallel by as many executors as possible. What I'm not sure about is how the connection management is done.
If I perform: `rdd.map(y => classTable.put(toPut(y)))`

where toPut converts y to a Put and classTable is an org.apache.hadoop.hbase.client.Table.

If classTable was defined in the initializer of the class, how does every executor get it? Does every executor create its own classTable, does it get copied to each one, or does the put run on only a single executor? I don't really understand this, so if anyone can explain or point me in the right direction I'll be grateful.
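For context on the question above: HBase `Connection` and `Table` objects are not serializable, so a `classTable` created on the driver generally cannot be shipped to executors as part of the closure. A common pattern is to open the connection inside `foreachPartition`, so each executor task creates and closes its own. A minimal sketch, assuming a hypothetical record type `MyRecord`, a `toPut` conversion like the one in the question, and a made-up table name `"my_table"`:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.spark.rdd.RDD

// MyRecord and toPut are placeholders standing in for the question's own types.
case class MyRecord(rowKey: Array[Byte], value: Array[Byte])

def toPut(r: MyRecord): Put =
  new Put(r.rowKey).addColumn("cf".getBytes, "q".getBytes, r.value)

def writeToHBase(rdd: RDD[MyRecord]): Unit = {
  rdd.foreachPartition { partition =>
    // This block runs on the executor, once per partition.
    // Each task builds its own connection instead of serializing one from the driver.
    val conf = HBaseConfiguration.create()
    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("my_table"))
    try {
      partition.foreach(y => table.put(toPut(y)))
    } finally {
      table.close()
      connection.close()
    }
  }
}
```

Note also that a side-effecting write belongs in an action such as `foreachPartition`, not in a `map` (which is lazy and may not run at all unless another action forces it).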