<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy() in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/27036#M4117</link>
    <description>&lt;P&gt;I have seen Hadoop load native libraries from the running user's home directory onto the classpath.&amp;nbsp; Maybe the same thing is happening to you with Spark.&amp;nbsp; Check your home directory for&lt;/P&gt;&lt;PRE&gt;ls ~/lib*
libhadoop.a       libhadoop.so        libhadooputils.a  libsnappy.so    libsnappy.so.1.1.3
libhadooppipes.a  libhadoop.so.1.0.0  libhdfs.a         libsnappy.so.1&lt;/PRE&gt;&lt;P&gt;and delete them if found.&amp;nbsp; I could be totally off, but this was the culprit in our case.&lt;/P&gt;</description>
    <pubDate>Tue, 05 May 2015 20:05:20 GMT</pubDate>
    <dc:creator>kwitt.ebay</dc:creator>
    <dc:date>2015-05-05T20:05:20Z</dc:date>
    <item>
      <title>spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy()</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/22724#M4113</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;I am running a CDH5.2 cluster including Spark on YARN.&amp;nbsp; When I run jobs through spark-shell with a local driver I am able to read and process Snappy compressed files; however, as soon as I try to run the same scripts (a wordcount, for testing purposes) on YARN I get an UnsatisfiedLinkError (see below):&lt;/P&gt;&lt;PRE&gt;java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
        org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
        org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:190)
        org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176)
        org.apache.hadoop.mapred.LineRecordReader.&amp;lt;init&amp;gt;(LineRecordReader.java:110)
        org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        org.apache.spark.rdd.HadoopRDD$$anon$1.&amp;lt;init&amp;gt;(HadoopRDD.scala:198)
        org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:189)
        org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:98)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:180)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)&lt;/PRE&gt;&lt;P&gt;I have tried to set the library path to libsnappy.so.1 with a number of variables, including LD_LIBRARY_PATH, JAVA_LIBRARY_PATH, and SPARK_LIBRARY_PATH in spark-env.sh and hadoop-env.sh, as well as spark.executor.extraLibraryPath and spark.executor.extraClassPath in spark-defaults.conf.&lt;/P&gt;&lt;P&gt;I am at a loss as to what could be causing this problem, since running locally works perfectly.&lt;/P&gt;&lt;P&gt;Any pointers/ideas would be really helpful.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:15:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/22724#M4113</guid>
      <dc:creator>rdh</dc:creator>
      <dc:date>2022-09-16T09:15:30Z</dc:date>
    </item>
    <item>
      <title>Re: spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy()</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/22759#M4114</link>
      <description>&lt;P&gt;The solution I found was to add the following environment variables to spark-env.sh.&amp;nbsp; The first two lines let spark-shell read Snappy files when run in local mode, and the third lets it do so in YARN mode.&lt;/P&gt;&lt;PRE&gt;export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/lib/hadoop/lib/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/hadoop/lib/native
export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"&lt;/PRE&gt;</description>
      <pubDate>Tue, 16 Dec 2014 23:25:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/22759#M4114</guid>
      <dc:creator>rdh</dc:creator>
      <dc:date>2014-12-16T23:25:04Z</dc:date>
    </item>
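For readers following along outside the feed, the spark-env.sh fix in the reply above can be sketched as a standalone shell snippet. The native-library location /usr/lib/hadoop/lib/native is the poster's path, not a universal one; adjust it to your install.

```shell
# Sketch of the spark-env.sh fix from the reply above.
# Assumption: native libs live in /usr/lib/hadoop/lib/native (the poster's path).
NATIVE_DIR=/usr/lib/hadoop/lib/native

# Append rather than overwrite, and avoid a leading ':' when the vars are unset.
export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH:+$JAVA_LIBRARY_PATH:}$NATIVE_DIR"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$NATIVE_DIR"

# Forward both variables into the YARN containers that run the executors,
# which is what makes the YARN case work (local mode only needs the first two).
export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"

echo "$SPARK_YARN_USER_ENV"
```

The `${VAR:+$VAR:}` form is a small hardening over the original lines: it keeps an empty variable from leaving a stray leading colon in the path.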
    <item>
      <title>Re: spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy()</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/23010#M4115</link>
      <description>&lt;P&gt;You can add the following to spark-defaults.conf, pointing at your native library path:&lt;/P&gt;&lt;PRE&gt;spark.driver.extraLibraryPath   $HADOOP_HOME/lib/native/&lt;/PRE&gt;</description>
      <pubDate>Tue, 23 Dec 2014 10:54:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/23010#M4115</guid>
      <dc:creator>Sivaa2015</dc:creator>
      <dc:date>2014-12-23T10:54:52Z</dc:date>
    </item>
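Written out as a concrete spark-defaults.conf fragment, the suggestion above might look like the sketch below. Note two assumptions: spark.executor.extraLibraryPath is added on the guess that the executors (where the YARN failure occurs) need the same path as the driver, and the path is spelled out literally because spark-defaults.conf does not expand environment variables such as $HADOOP_HOME.

```
# spark-defaults.conf sketch -- paths are examples; use the concrete
# native-library directory of your own Hadoop install
spark.driver.extraLibraryPath   /usr/lib/hadoop/lib/native
spark.executor.extraLibraryPath /usr/lib/hadoop/lib/native
```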
    <item>
      <title>Re: spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy()</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/23022#M4116</link>
      <description>&lt;P&gt;I tried that. &amp;nbsp;It didn't work.&lt;/P&gt;</description>
      <pubDate>Tue, 23 Dec 2014 14:21:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/23022#M4116</guid>
      <dc:creator>rdh</dc:creator>
      <dc:date>2014-12-23T14:21:34Z</dc:date>
    </item>
    <item>
      <title>Re: spark on yarn: java.lang.UnsatisfiedLinkError: ... NativeCodeLoader.buildSupportsSnappy()</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/27036#M4117</link>
      <description>&lt;P&gt;I have seen Hadoop load native libraries from the running user's home directory onto the classpath.&amp;nbsp; Maybe the same thing is happening to you with Spark.&amp;nbsp; Check your home directory for&lt;/P&gt;&lt;PRE&gt;ls ~/lib*
libhadoop.a       libhadoop.so        libhadooputils.a  libsnappy.so    libsnappy.so.1.1.3
libhadooppipes.a  libhadoop.so.1.0.0  libhdfs.a         libsnappy.so.1&lt;/PRE&gt;&lt;P&gt;and delete them if found.&amp;nbsp; I could be totally off, but this was the culprit in our case.&lt;/P&gt;</description>
      <pubDate>Tue, 05 May 2015 20:05:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/spark-on-yarn-java-lang-UnsatisfiedLinkError/m-p/27036#M4117</guid>
      <dc:creator>kwitt.ebay</dc:creator>
      <dc:date>2015-05-05T20:05:20Z</dc:date>
    </item>
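The check in the reply above can be scripted. This is a minimal sketch, assuming the same `ls ~/lib*` test the poster used; the directory argument stands in for $HOME so the check can be pointed anywhere.

```shell
# Look for stray Hadoop/Snappy native libraries (libhadoop*, libsnappy*, ...)
# in a directory, as the reply above suggests doing for the user's home.
check_stray_libs() {
  dir="$1"
  # ls exits non-zero when no lib* files match; treat that as "clean"
  if ls "$dir"/lib* >/dev/null 2>&1; then
    echo "stray lib* files found in $dir; consider removing them"
    return 1
  fi
  echo "no stray lib* files in $dir"
}

# Check the current user's home; '|| true' keeps a hit from aborting the script.
check_stray_libs "${HOME}" || true
```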
  </channel>
</rss>