Sandbox HDP 2.5.0 TP, Spark 1.6.2 - I am encountering the following errors while running a simple word count in spark-shell:
ERROR GPLNativeCodeLoader: Could not load native gpl library
ERROR LzoCodec: Cannot load native-lzo without native-hadoop
[root@sandbox ~]# cd $SPARK_HOME
[root@sandbox spark-client]# ./bin/spark-shell --master yarn-client --driver-memory 512m --executor-memory 512m --jars /usr/hdp/2.5.0.0-817/hadoop/lib/hadoop-lzo-0.6.0.2.5.0.0-817.jar
The following code is run in the Spark shell:
val file = sc.textFile("/tmp/data")
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("/tmp/wordcount")
This yields the following errors:
ERROR GPLNativeCodeLoader: Could not load native gpl library
ERROR LzoCodec: Cannot load native-lzo without native-hadoop
The same errors appear with or without the --jars parameter:
--jars /usr/hdp/2.5.0.0-817/hadoop/lib/hadoop-lzo-0.6.0.2.5.0.0-817.jar
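For completeness: I am not passing any native library path to the shell. A variant of the launch command that also points the driver and the executors at the HDP native library directory (assuming the sandbox keeps libgplcompression.so and the other native libraries under /usr/hdp/2.5.0.0-817/hadoop/lib/native) would be, as a sketch of what I could try:
./bin/spark-shell --master yarn-client --driver-memory 512m --executor-memory 512m \
  --jars /usr/hdp/2.5.0.0-817/hadoop/lib/hadoop-lzo-0.6.0.2.5.0.0-817.jar \
  --driver-library-path /usr/hdp/2.5.0.0-817/hadoop/lib/native \
  --conf spark.executor.extraLibraryPath=/usr/hdp/2.5.0.0-817/hadoop/lib/native
Is setting the library path this way the right approach, or should spark.driver.extraLibraryPath and spark.executor.extraLibraryPath go into spark-defaults.conf instead?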
Full Log:
- [root@sandbox ~]# cd $SPARK_HOME
- [root@sandbox spark-client]# ./bin/spark-shell --master yarn-client --driver-memory 512m --executor-memory 512m --jars /usr/hdp/2.5.0.0-817/hadoop/lib/hadoop-lzo-0.6.0.2.5.0.0-817.jar
- 16/08/27 16:28:23 INFO SecurityManager: Changing view acls to: root
- 16/08/27 16:28:23 INFO SecurityManager: Changing modify acls to: root
- 16/08/27 16:28:23 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
- 16/08/27 16:28:23 INFO HttpServer: Starting HTTP Server
- 16/08/27 16:28:23 INFO Server: jetty-8.y.z-SNAPSHOT
- 16/08/27 16:28:23 INFO AbstractConnector: Started SocketConnector@0.0.0.0:43011
- 16/08/27 16:28:23 INFO Utils: Successfully started service 'HTTP class server' on port 43011.
- Welcome to
- ____ __
- / __/__ ___ _____/ /__
- _\ \/ _ \/ _ `/ __/'_/
- /___/ .__/\_,_/_/ /_/\_\ version 1.6.2
- /_/
- Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.7.0_101)
- Type in expressions to have them evaluated.
- Type :help for more information.
- 16/08/27 16:28:26 INFO SparkContext: Running Spark version 1.6.2
- 16/08/27 16:28:26 INFO SecurityManager: Changing view acls to: root
- 16/08/27 16:28:26 INFO SecurityManager: Changing modify acls to: root
- 16/08/27 16:28:26 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
- 16/08/27 16:28:26 INFO Utils: Successfully started service 'sparkDriver' on port 45506.
- 16/08/27 16:28:27 INFO Slf4jLogger: Slf4jLogger started
- 16/08/27 16:28:27 INFO Remoting: Starting remoting
- 16/08/27 16:28:27 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.2.15:44829]
- 16/08/27 16:28:27 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 44829.
- 16/08/27 16:28:27 INFO SparkEnv: Registering MapOutputTracker
- 16/08/27 16:28:27 INFO SparkEnv: Registering BlockManagerMaster
- 16/08/27 16:28:27 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-0776b175-5dd7-49b9-adf7-f2cbd85a1e1b
- 16/08/27 16:28:27 INFO MemoryStore: MemoryStore started with capacity 143.6 MB
- 16/08/27 16:28:27 INFO SparkEnv: Registering OutputCommitCoordinator
- 16/08/27 16:28:27 INFO Server: jetty-8.y.z-SNAPSHOT
- 16/08/27 16:28:27 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
- 16/08/27 16:28:27 INFO Utils: Successfully started service 'SparkUI' on port 4040.
- 16/08/27 16:28:27 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
- 16/08/27 16:28:27 INFO HttpFileServer: HTTP File server directory is /tmp/spark-61ecb98e-989c-4396-9b30-032c4d5a2b90/httpd-857ce699-7db0-428c-9af5-1dca4ec5330d
- 16/08/27 16:28:27 INFO HttpServer: Starting HTTP Server
- 16/08/27 16:28:27 INFO Server: jetty-8.y.z-SNAPSHOT
- 16/08/27 16:28:27 INFO AbstractConnector: Started SocketConnector@0.0.0.0:37515
- 16/08/27 16:28:27 INFO Utils: Successfully started service 'HTTP file server' on port 37515.
- 16/08/27 16:28:27 INFO SparkContext: Added JAR file:/usr/hdp/2.5.0.0-817/hadoop/lib/hadoop-lzo-0.6.0.2.5.0.0-817.jar at http://10.0.2.15:37515/jars/hadoop-lzo-0.6.0.2.5.0.0-817.jar with timestamp 1472315307772
- spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
- 16/08/27 16:28:28 INFO TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
- 16/08/27 16:28:28 INFO RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
- 16/08/27 16:28:28 INFO Client: Requesting a new application from cluster with 1 NodeManagers
- 16/08/27 16:28:28 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2250 MB per container)
- 16/08/27 16:28:28 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
- 16/08/27 16:28:28 INFO Client: Setting up container launch context for our AM
- 16/08/27 16:28:28 INFO Client: Setting up the launch environment for our AM container
- 16/08/27 16:28:28 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.5.0.0-817/spark/spark-hdp-assembly.jar
- 16/08/27 16:28:28 INFO Client: Preparing resources for our AM container
- 16/08/27 16:28:28 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.5.0.0-817/spark/spark-hdp-assembly.jar
- 16/08/27 16:28:28 INFO Client: Source and destination file systems are the same. Not copying hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.5.0.0-817/spark/spark-hdp-assembly.jar
- 16/08/27 16:28:29 INFO Client: Uploading resource file:/tmp/spark-61ecb98e-989c-4396-9b30-032c4d5a2b90/__spark_conf__5084804354575467223.zip -> hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1472312154461_0006/__spark_conf__5084804354575467223.zip
- 16/08/27 16:28:29 INFO SecurityManager: Changing view acls to: root
- 16/08/27 16:28:29 INFO SecurityManager: Changing modify acls to: root
- 16/08/27 16:28:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
- 16/08/27 16:28:29 INFO Client: Submitting application 6 to ResourceManager
- 16/08/27 16:28:29 INFO YarnClientImpl: Submitted application application_1472312154461_0006
- 16/08/27 16:28:29 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1472312154461_0006 and attemptId None
- 16/08/27 16:28:30 INFO Client: Application report for application_1472312154461_0006 (state: ACCEPTED)
- 16/08/27 16:28:30 INFO Client:
- client token: N/A
- diagnostics: AM container is launched, waiting for AM container to Register with RM
- ApplicationMaster host: N/A
- ApplicationMaster RPC port: -1
- queue: default
- start time: 1472315309252
- final status: UNDEFINED
- tracking URL: http://sandbox.hortonworks.com:8088/proxy/application_1472312154461_0006/
- user: root
- 16/08/27 16:28:31 INFO Client: Application report for application_1472312154461_0006 (state: ACCEPTED)
- 16/08/27 16:28:32 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
- 16/08/27 16:28:32 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sandbox.hortonworks.com, PROXY_URI_BASES -> http://sandbox.hortonworks.com:8088/proxy/application_1472312154461_0006), /proxy/application_1472312154461_0006
- 16/08/27 16:28:32 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
- 16/08/27 16:28:32 INFO Client: Application report for application_1472312154461_0006 (state: RUNNING)
- 16/08/27 16:28:32 INFO Client:
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: 10.0.2.15
- ApplicationMaster RPC port: 0
- queue: default
- start time: 1472315309252
- final status: UNDEFINED
- tracking URL: http://sandbox.hortonworks.com:8088/proxy/application_1472312154461_0006/
- user: root
- 16/08/27 16:28:32 INFO YarnClientSchedulerBackend: Application application_1472312154461_0006 has started running.
- 16/08/27 16:28:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34124.
- 16/08/27 16:28:32 INFO NettyBlockTransferService: Server created on 34124
- 16/08/27 16:28:32 INFO BlockManagerMaster: Trying to register BlockManager
- 16/08/27 16:28:32 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:34124 with 143.6 MB RAM, BlockManagerId(driver, 10.0.2.15, 34124)
- 16/08/27 16:28:32 INFO BlockManagerMaster: Registered BlockManager
- 16/08/27 16:28:32 INFO EventLoggingListener: Logging events to hdfs:///spark-history/application_1472312154461_0006
- 16/08/27 16:28:36 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (sandbox.hortonworks.com:39728) with ID 1
- 16/08/27 16:28:36 INFO BlockManagerMasterEndpoint: Registering block manager sandbox.hortonworks.com:38362 with 143.6 MB RAM, BlockManagerId(1, sandbox.hortonworks.com, 38362)
- 16/08/27 16:28:57 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
- 16/08/27 16:28:57 INFO SparkILoop: Created spark context..
- Spark context available as sc.
- 16/08/27 16:28:58 INFO HiveContext: Initializing execution hive, version 1.2.1
- 16/08/27 16:28:58 INFO ClientWrapper: Inspected Hadoop version: 2.7.1.2.5.0.0-817
- 16/08/27 16:28:58 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.1.2.5.0.0-817
- 16/08/27 16:28:58 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
- 16/08/27 16:28:58 INFO ObjectStore: ObjectStore, initialize called
- 16/08/27 16:28:58 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
- 16/08/27 16:28:58 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
- 16/08/27 16:28:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
- 16/08/27 16:28:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
- 16/08/27 16:29:00 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
- 16/08/27 16:29:01 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
- 16/08/27 16:29:01 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
- 16/08/27 16:29:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
- 16/08/27 16:29:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
- 16/08/27 16:29:02 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
- 16/08/27 16:29:02 INFO ObjectStore: Initialized ObjectStore
- 16/08/27 16:29:02 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
- 16/08/27 16:29:02 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
- 16/08/27 16:29:03 INFO HiveMetaStore: Added admin role in metastore
- 16/08/27 16:29:03 INFO HiveMetaStore: Added public role in metastore
- 16/08/27 16:29:03 INFO HiveMetaStore: No user is added in admin role, since config is empty
- 16/08/27 16:29:03 INFO HiveMetaStore: 0: get_all_databases
- 16/08/27 16:29:03 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
- 16/08/27 16:29:03 INFO HiveMetaStore: 0: get_functions: db=default pat=*
- 16/08/27 16:29:03 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
- 16/08/27 16:29:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
- 16/08/27 16:29:03 INFO SessionState: Created local directory: /tmp/6ebb0a60-b229-4dad-94a3-e2386ba7b4ec_resources
- 16/08/27 16:29:03 INFO SessionState: Created HDFS directory: /tmp/hive/root/6ebb0a60-b229-4dad-94a3-e2386ba7b4ec
- 16/08/27 16:29:03 INFO SessionState: Created local directory: /tmp/root/6ebb0a60-b229-4dad-94a3-e2386ba7b4ec
- 16/08/27 16:29:03 INFO SessionState: Created HDFS directory: /tmp/hive/root/6ebb0a60-b229-4dad-94a3-e2386ba7b4ec/_tmp_space.db
- 16/08/27 16:29:03 INFO HiveContext: default warehouse location is /user/hive/warehouse
- 16/08/27 16:29:03 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
- 16/08/27 16:29:03 INFO ClientWrapper: Inspected Hadoop version: 2.7.1.2.5.0.0-817
- 16/08/27 16:29:03 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.1.2.5.0.0-817
- 16/08/27 16:29:04 INFO metastore: Trying to connect to metastore with URI thrift://sandbox.hortonworks.com:9083
- 16/08/27 16:29:04 INFO metastore: Connected to metastore.
- 16/08/27 16:29:04 INFO SessionState: Created local directory: /tmp/83a1e2d3-8c24-4f12-9841-fab259a77514_resources
- 16/08/27 16:29:04 INFO SessionState: Created HDFS directory: /tmp/hive/root/83a1e2d3-8c24-4f12-9841-fab259a77514
- 16/08/27 16:29:04 INFO SessionState: Created local directory: /tmp/root/83a1e2d3-8c24-4f12-9841-fab259a77514
- 16/08/27 16:29:04 INFO SessionState: Created HDFS directory: /tmp/hive/root/83a1e2d3-8c24-4f12-9841-fab259a77514/_tmp_space.db
- 16/08/27 16:29:04 INFO SparkILoop: Created sql context (with Hive support)..
- SQL context available as sqlContext.
- scala> val file = sc.textFile("/tmp/data")
- 16/08/27 16:29:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 234.8 KB, free 234.8 KB)
- 16/08/27 16:29:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.1 KB, free 262.9 KB)
- 16/08/27 16:29:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:34124 (size: 28.1 KB, free: 143.6 MB)
- 16/08/27 16:29:20 INFO SparkContext: Created broadcast 0 from textFile at <console>:27
- file: org.apache.spark.rdd.RDD[String] = /tmp/data MapPartitionsRDD[1] at textFile at <console>:27
- scala> val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
- 16/08/27 16:29:35 ERROR GPLNativeCodeLoader: Could not load native gpl library
- java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
- at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1889)
- at java.lang.Runtime.loadLibrary0(Runtime.java:849)
- at java.lang.System.loadLibrary(System.java:1088)
- at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
- at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:71)
- at java.lang.Class.forName0(Native Method)
- at java.lang.Class.forName(Class.java:278)
- at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2147)
- at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2112)
- at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
- at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:179)
- at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:606)
- at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
- at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
- at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
- at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:189)
- at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242)
- at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
- at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
- at org.apache.spark.rdd.PairRDDFunctions$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
- at org.apache.spark.rdd.PairRDDFunctions$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
- at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
- at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
- at org.apache.spark.rdd.RDD.withScope(RDD.scala:323)
- at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
- at $line19.$read$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:29)
- at $line19.$read$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:34)
- at $line19.$read$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:36)
- at $line19.$read$iwC$iwC$iwC$iwC$iwC.<init>(<console>:38)
- at $line19.$read$iwC$iwC$iwC$iwC.<init>(<console>:40)
- at $line19.$read$iwC$iwC$iwC.<init>(<console>:42)
- at $line19.$read$iwC$iwC.<init>(<console>:44)
- at $line19.$read$iwC.<init>(<console>:46)
- at $line19.$read.<init>(<console>:48)
- at $line19.$read$.<init>(<console>:52)
- at $line19.$read$.<clinit>(<console>)
- at $line19.$eval$.<init>(<console>:7)
- at $line19.$eval$.<clinit>(<console>)
- at $line19.$eval.$print(<console>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:606)
- at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
- at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
- at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
- at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
- at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
- at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
- at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
- at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
- at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
- at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
- at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$loop(SparkILoop.scala:670)
- at org.apache.spark.repl.SparkILoop$anonfun$org$apache$spark$repl$SparkILoop$process$1.apply$mcZ$sp(SparkILoop.scala:997)
- at org.apache.spark.repl.SparkILoop$anonfun$org$apache$spark$repl$SparkILoop$process$1.apply(SparkILoop.scala:945)
- at org.apache.spark.repl.SparkILoop$anonfun$org$apache$spark$repl$SparkILoop$process$1.apply(SparkILoop.scala:945)
- at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
- at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$process(SparkILoop.scala:945)
- at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
- at org.apache.spark.repl.Main$.main(Main.scala:31)
- at org.apache.spark.repl.Main.main(Main.scala)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:606)
- at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:731)
- at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
- at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
- at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
- at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
- 16/08/27 16:29:35 ERROR LzoCodec: Cannot load native-lzo without native-hadoop
- 16/08/27 16:29:35 INFO FileInputFormat: Total input paths to process : 1
- counts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:29
- scala>
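In case it helps with the diagnosis, the directories the driver JVM searches for native libraries can be printed from the same session (just a check, not a fix; the UnsatisfiedLinkError above says gplcompression is not on java.library.path):
scala> println(System.getProperty("java.library.path"))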
Please help me fix this issue.