<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: java.lang.RuntimeException: native-lzo library not available Error on CDH 5.3 with Spark 1.2 in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/java-lang-RuntimeException-native-lzo-library-not-available/m-p/66957#M22790</link>
    <description>&lt;P&gt;Hi Guys,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm running into an issue where my Spark jobs fail with the error below. I'm using Spark 1.6.0 with CDH 5.13.0.&lt;/P&gt;&lt;P&gt;I've tried to figure it out, with no success.&lt;/P&gt;&lt;P&gt;I'd appreciate any help, or a direction on how to attack this issue.&lt;/P&gt;&lt;P&gt;User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 3, xxxxxx, executor 1): java.lang.RuntimeException: native-lzo library not available&lt;BR /&gt;at com.hadoop.compression.lzo.LzoCodec.getDecompressorType(LzoCodec.java:193)&lt;BR /&gt;at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:181)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1995)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1881)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1830)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1844)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)&lt;BR /&gt;at com.liveperson.dallas.lp.utils.incremental.DallasGenericTextFileRecordReader.initialize(DallasGenericTextFileRecordReader.java:64)&lt;BR /&gt;at com.liveperson.hadoop.fs.inputs.LPCombineFileRecordReaderWrapper.initialize(LPCombineFileRecordReaderWrapper.java:38)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initialize(CombineFileRecordReader.java:63)&lt;BR /&gt;at org.apache.spark.rdd.NewHadoopRDD$$anon$1.&amp;lt;init&amp;gt;(NewHadoopRDD.scala:168)&lt;BR /&gt;at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:133)&lt;BR /&gt;at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)&lt;BR /&gt;at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)&lt;BR /&gt;at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)&lt;BR /&gt;at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:89)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:745)&lt;BR /&gt;Driver stacktrace:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I see the hadoop-lzo jar under the GPLEXTRAS parcel:&lt;/P&gt;&lt;P&gt;ll&lt;BR /&gt;total 104&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 35308 Oct 4 2017 COPYING.hadoop-lzo&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 62268 Oct 4 2017 hadoop-lzo-0.4.15-cdh5.13.0.jar&lt;BR /&gt;lrwxrwxrwx 1 cloudera-scm cloudera-scm 31 May 3 07:23 hadoop-lzo.jar -&amp;gt; hadoop-lzo-0.4.15-cdh5.13.0.jar&lt;BR /&gt;drwxr-xr-x 2 cloudera-scm cloudera-scm 4096 Oct 4 2017 native&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, I see an LZO shared library only for 
Impala:&lt;/P&gt;&lt;P&gt;[root@xxxxxxx ~]# locate *lzo*.so*&lt;BR /&gt;/opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/impala/lib/libimpalalzo.so&lt;BR /&gt;/usr/lib64/liblzo2.so.2&lt;BR /&gt;/usr/lib64/liblzo2.so.2.0.0&lt;/P&gt;&lt;P&gt;The directory /opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/lib/native has:&lt;/P&gt;&lt;P&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 22918 Oct 4 2017 libgplcompression.a&lt;BR /&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 1204 Oct 4 2017 libgplcompression.la&lt;BR /&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 1205 Oct 4 2017 libgplcompression.lai&lt;BR /&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 15760 Oct 4 2017 libgplcompression.so&lt;BR /&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 15768 Oct 4 2017 libgplcompression.so.0&lt;BR /&gt;-rwxr-xr-x 1 cloudera-scm cloudera-scm 15768 Oct 4 2017 libgplcompression.so.0.0.0&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;and /opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/spark-netlib/lib has:&lt;/P&gt;&lt;P&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 8673 Oct 4 2017 jniloader-1.1.jar&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 53249 Oct 4 2017 native_ref-java-1.1.jar&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 53295 Oct 4 2017 native_system-java-1.1.jar&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 1732268 Oct 4 2017 netlib-native_ref-linux-x86_64-1.1-natives.jar&lt;BR /&gt;-rw-r--r-- 1 cloudera-scm cloudera-scm 446694 Oct 4 2017 netlib-native_system-linux-x86_64-1.1-natives.jar&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Note: the issue occurs only with the Spark job; the MapReduce job works fine.&lt;/P&gt;</description>
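    <!--
    One hedged direction to try, given the listings in the question: libgplcompression.so exists under the GPLEXTRAS parcel, and MapReduce (which works) loads it, so the Spark executors are likely launching without that directory on java.library.path. A minimal sketch for spark-defaults.conf, assuming the parcel path exactly as shown in the post; these keys (spark.*.extraLibraryPath / spark.*.extraClassPath) do exist in Spark 1.6:

    ```properties
    # Sketch only: point Spark at the GPLEXTRAS native libs and the hadoop-lzo jar.
    # The parcel path is taken from the post's directory listings.
    spark.driver.extraLibraryPath=/opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/lib/native
    spark.executor.extraLibraryPath=/opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/lib/native
    spark.driver.extraClassPath=/opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/lib/hadoop-lzo.jar
    spark.executor.extraClassPath=/opt/cloudera/parcels/GPLEXTRAS-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/lib/hadoop-lzo.jar
    ```

    The system liblzo2.so.2 under /usr/lib64 (visible in the locate output) should then resolve via the default loader path when libgplcompression is loaded.
    -->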
    <pubDate>Fri, 04 May 2018 05:56:54 GMT</pubDate>
    <dc:creator>Fawze</dc:creator>
    <dc:date>2018-05-04T05:56:54Z</dc:date>
  </channel>
</rss>

