Created 02-10-2016 04:24 PM
I've installed Spark 1.6 on my Hortonworks cluster (2.3.4.0-3485) by following this guide:
http://hortonworks.com/hadoop-tutorial/apache-spark-1-6-technical-preview-with-hdp-2-3/
When I run spark-shell or pyspark from the command line, the first two lines of output are:
ls: cannot access /usr/hdp/None/hadoop/lib: No such file or directory
16/02/10 17:07:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Is there anything I'm missing in my Spark installation?
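One detail that stands out to me: the /usr/hdp/None in that first error looks like the hdp.version property isn't being resolved. If I've read the technical preview steps correctly, that is supposed to be handled by a java-opts file in Spark's conf directory, something like the following (the conf path here is my guess; adjust it to wherever your Spark conf actually lives):

# Tell Spark which HDP version to substitute for ${hdp.version}
# (conf path assumed; point it at your actual Spark conf directory)
echo "-Dhdp.version=2.3.4.0-3485" > /usr/hdp/current/spark-client/conf/java-opts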
Created 02-10-2016 04:27 PM
Please install the HDFS client on the node you're running spark-shell from: go to the host in Ambari, choose to install clients, and pick HDFS. Also double-check this thread: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-pla...
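For reference, the fix discussed in that thread usually comes down to pointing Hadoop at its native libraries via environment variables, roughly like this (paths assume the standard HDP layout on your cluster; adjust the version directory if yours differs):

# Point Hadoop at the HDP install and its native libraries
export HADOOP_HOME=/usr/hdp/2.3.4.0-3485/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"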
Created 02-10-2016 10:07 PM
Thank you for the link. Instructive reading 🙂
The clients are installed (HDFS, YARN, and Hive are, I believe, required). I've been doing some research and have also checked the link you proposed. Adding various environment variables to my .bashrc didn't work.
I also read this document: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html. If I run hadoop checknative -a I get, among other things, the following:
hadoop: true /usr/hdp/2.3.4.0-3485/hadoop/lib/native/libhadoop.so.1.0.0
snappy: true /usr/hdp/2.3.4.0-3485/hadoop/lib/native/libsnappy.so.1
Seems as if Hadoop sees the native libraries, but Spark does not?
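If Hadoop finds the libraries but Spark doesn't, my next guess (not yet verified) would be to hand Spark the native directory explicitly, either in spark-defaults.conf:

# Assumed fix: expose the Hadoop native libraries to the Spark driver and executors
spark.driver.extraLibraryPath    /usr/hdp/2.3.4.0-3485/hadoop/lib/native
spark.executor.extraLibraryPath  /usr/hdp/2.3.4.0-3485/hadoop/lib/native

or per invocation:

spark-shell --driver-library-path /usr/hdp/2.3.4.0-3485/hadoop/lib/native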