Created on 09-06-2016 12:55 PM - edited 08-17-2019 10:20 AM
Setting up GPU-enabled TensorFlow to work with Zeppelin
Sometimes we want to do some quick Deep Learning prototyping
using TensorFlow. We also want to take advantage of Spark for data
pre-processing, scaling, and feature extraction, while keeping everything in
one place for a demo. This step-by-step guide walks through the process of
setting up these tools to work with each other.
# Blacklist the open-source nouveau driver so it cannot conflict with the NVIDIA driver
echo -e "blacklist nouveau\nblacklist lbm-nouveau\noptions nouveau modeset=0\nalias nouveau off\nalias lbm-nouveau off\n" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
# Rebuild the initramfs so the blacklist takes effect at boot
sudo update-initramfs -u
sudo reboot # we do actually need to reboot here
# Install the extra kernel modules package, then load the NVIDIA kernel module
sudo apt-get install -y linux-image-extra-virtual
sudo modprobe nvidia
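After the reboot it is worth confirming that the blacklist actually worked before moving on. The snippet below is a minimal sketch of such a check; it only inspects the loaded-module list, and its output will naturally vary from machine to machine.

```shell
# Hedged check: report whether the nouveau module is still loaded after reboot.
# On a correctly configured box this should print "nouveau is not loaded".
if lsmod 2>/dev/null | grep -q '^nouveau'; then
    echo "nouveau is still loaded"
else
    echo "nouveau is not loaded"
fi
```

If nouveau is still loaded at this point, re-check the contents of /etc/modprobe.d/blacklist-nouveau.conf and rerun `sudo update-initramfs -u` before rebooting again.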
It would be too easy for NVIDIA if that were it. Next we need
to get cuDNN through their Accelerated Computing Program. You will need to apply
for it here (https://developer.nvidia.com/cudnn).
Approval shouldn't take more than a couple of hours.
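Once the driver and cuDNN are in place, a quick sanity check from a Zeppelin Python paragraph confirms that TensorFlow can actually see the GPU. This is a hedged sketch: it assumes a reasonably recent TensorFlow (the `tf.config.list_physical_devices` API), and it degrades gracefully when TensorFlow is not installed in the current environment.

```python
# Hedged sketch: check whether TensorFlow can see any GPUs.
# Assumes a TensorFlow version that provides tf.config.list_physical_devices.
import importlib.util

if importlib.util.find_spec("tensorflow") is None:
    # TensorFlow is not installed in this interpreter's environment
    print("TensorFlow is not installed in this environment")
else:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
    print(f"GPUs visible to TensorFlow: {len(gpus)}")
```

A count of zero here usually means the driver, CUDA, or cuDNN setup is incomplete, or that Zeppelin's interpreter is pointed at a different Python environment than the one where GPU-enabled TensorFlow was installed.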