
Spark Streaming Fails in Cluster Mode (Flume as Source)


I am using Spark Streaming (the event count example) with Flume as the source of Avro events. Everything works fine when I run Spark in local mode, but when I try to run the example on my cluster I get a "failed to bind" error.
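
For reference, WordCount is essentially the standard Flume event count example. A minimal sketch of what it does (assuming the host and port come from the command-line arguments, e.g. node01 and 6789; the actual class may differ slightly):

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object WordCount {
  def main(args: Array[String]) {
    val Array(host, port) = args

    val sparkConf = new SparkConf().setAppName("WordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Receiver-based Flume stream: the receiver starts an Avro NettyServer
    // that binds host:port on whichever worker the receiver task runs on
    val stream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_ONLY_SER_2)

    // Count the events received in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}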

 

Command line that works (local mode):

spark-submit --class "WordCount" --master local[*] --jars /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/spark/lib/spark-streaming-flume_2.10-1.2.0.jar,/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/flume-ng/lib/avro-ipc.jar,/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/flume-ng/lib/flume-ng-sdk-1.5.0-cdh5.3.1.jar /usr/local/WordCount/target/scala-2.10/wordcount_2.10-1.0.jar node01 6789

 

Command line that does not work (cluster mode):

spark-submit --class "WordCount" --master spark://node01:7077 --jars /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/spark/lib/spark-streaming-flume_2.10-1.2.0.jar,/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/flume-ng/lib/avro-ipc.jar,/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/flume-ng/lib/flume-ng-sdk-1.5.0-cdh5.3.1.jar /usr/local/WordCount/target/scala-2.10/wordcount_2.10-1.0.jar node01 6789

 

Error:

ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.168.94:6789
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
        at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
        at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:277)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:269)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        ... 3 more