Hi @Sambavi ,
You can install any required dependencies on all nodes and use them, but keep in mind that pandas and NumPy do not provide distributed computing, so they will not work with big data sets.
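To illustrate the limitation: a pandas frame lives entirely in the memory of one Python process, so converting a big Spark DataFrame to pandas (e.g. via `toPandas()`) pulls everything onto a single machine. A minimal local sketch of the single-process nature of pandas (the column sizes here are illustrative):

```python
import pandas as pd
import numpy as np

# pandas holds the whole frame in this one process's RAM;
# nothing is partitioned or distributed across nodes.
df = pd.DataFrame({"x": np.arange(1_000_000, dtype=np.int64)})

# The full 8 MB of int64 data is resident locally.
local_bytes = int(df.memory_usage(deep=True).sum())
print(local_bytes >= df["x"].nbytes)
```

With Spark, the same data would be split across executors; the moment you convert to pandas, that partitioning is gone.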
If your Zeppelin is configured to use YARN cluster mode, Spark will pull all the data to the driver (on whichever data node the driver is running) and try to process it there. If the data set is not big, you can increase the driver resources and it will work, but that is not a real solution.
If you use client mode, everything is pulled into the Zeppelin node instead.
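If you do go the route of increasing driver resources for a modestly sized data set, that is done in Zeppelin's Spark interpreter settings (Interpreter > spark). The values below are illustrative only, not recommendations:

```
# Zeppelin Spark interpreter properties -- illustrative values
spark.driver.memory        8g
spark.driver.maxResultSize 4g
```

`spark.driver.maxResultSize` matters here too: it caps how much data actions like `collect()` (and `toPandas()`) may bring back to the driver.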
I recommend trying HandySpark, which exposes a pandas-like API on top of Spark DataFrames: https://github.com/dvgodoy/handyspark