
pyspark ImportError: No module named numpy

Explorer

File "/opt/mapr/spark/spark-1.6.1/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>

ImportError: No module named numpy

8 REPLIES

numpy is missing here. Install it with: pip install numpy
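Before reinstalling, it can help to confirm whether the interpreter in question can actually import numpy. A minimal sketch (the helper name is mine, not from the thread):

```python
def module_available(name):
    """Return True if the current interpreter can import the named module."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# pyspark.mllib fails at startup in exactly this way when numpy is missing.
if module_available("numpy"):
    print("numpy is importable by this interpreter")
else:
    print("numpy is NOT importable by this interpreter")
```

Running this with the same Python binary that Spark uses (not just the default shell one) shows whether the executors would see numpy.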

Explorer

I have already installed numpy, and it works fine from the Python console. I tried setting the Python environment variable in spark-env.sh, but that did not work.
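For reference, the variable Spark reads to pick the worker interpreter is PYSPARK_PYTHON. A hedged spark-env.sh sketch; the interpreter path is an assumption and should point at whichever Python actually has numpy installed on every node:

```shell
# spark-env.sh -- sketch, adjust the path to your numpy-equipped Python
export PYSPARK_PYTHON=/usr/bin/python2.7
export PYSPARK_DRIVER_PYTHON=/usr/bin/python2.7
```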

@Bhupendra Mishra

Are you running it in Spark local, standalone, or YARN mode?

Do you have multiple versions of Python installed on your machine, or are you working with a Python virtualenv? What is your PYTHONPATH?
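A quick way to answer those questions for any given interpreter is a short script like the sketch below; submitting it through spark-submit (rather than running it directly) shows what the Spark-launched Python actually sees:

```python
import os
import sys

# Report which interpreter is running and what module search path it uses.
print("interpreter:", sys.executable)
print("PYTHONPATH :", os.environ.get("PYTHONPATH", "<not set>"))
print("sys.path   :")
for p in sys.path:
    print("  ", p)
```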

Explorer

nope I have only one python 2.7.5 and

whereis python

python: /usr/bin/python /usr/bin/python2.7 /usr/bin/python2.7-config /usr/lib/python2.7 /usr/lib64/python2.7 /etc/python /usr/include/python2.7 /usr/share/man/man1/python.1.gz

Explorer

I am facing the same problem. I have installed numpy on all the nodes, and I am running under YARN. In the directory /usr/bin I see python, python2, and python2.7, but only python2.7 shows as green (executable) in the listing. echo $PYTHONPATH gave me an empty string. Afterwards, I executed export PYTHONPATH=/usr/bin/python2.7 on each node, but my job submission still exits with 'No module named numpy'. Any help?
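One likely issue with the command above: PYTHONPATH is meant to list module directories (e.g. site-packages), not the interpreter binary; the interpreter itself is selected through PYSPARK_PYTHON. A hedged sketch; the site-packages path below is an assumption, so check where pip actually installed numpy on your nodes:

```shell
# Interpreter goes in PYSPARK_PYTHON, module directories in PYTHONPATH
export PYSPARK_PYTHON=/usr/bin/python2.7
export PYTHONPATH=/usr/lib/python2.7/site-packages
```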

Explorer

Please check the permissions on the Python installation directories and verify that your current user has the necessary access.

Also try to reproduce the scenario as the root user; running as root, it should work.

Explorer

As @Bhupendra Mishra indirectly pointed out, make sure to run the pip install numpy command from a root account (sudo does not suffice) after forcing the umask to 022 (umask 022), so that the resulting file permissions cascade to the Spark (or Zeppelin) user.

Also, be aware that you need numpy installed on each and every worker, and even on the master itself (depending on your component placement).
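Putting both points together, the per-node procedure sketched in this thread looks roughly like this (run as root on every node; the version-print is just a sanity check):

```shell
# Run on EVERY node: master and all workers.
umask 022              # make installed files world-readable
pip install numpy
python -c "import numpy; print(numpy.__version__)"
```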