Member since: 01-25-2016
Posts: 19
Kudos Received: 5
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 14831 | 02-02-2016 08:02 AM |
02-02-2016 08:02 AM
1 Kudo
@Artem Ervits This is still an outstanding issue with 1.5.2. No workaround has been found. However, I have now upgraded to Spark 1.6 integrated with Zeppelin, which works fine.
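For anyone verifying the same upgrade, the quickest check of which Spark build is actually on the path:

./bin/spark-submit --version
# prints the Spark version banner, which should show 1.6.x after the upgrade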
01-29-2016 02:26 PM
@Neeraj Sabharwal I was thinking about the same 😉 It works fine when I run SparkPi directly from the CLI, without going through Zeppelin:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10

Also, the issue has been closed with status "Cannot Reproduce", so I am not sure what options I have left to get Zeppelin going. Any ideas?
01-29-2016 01:54 PM
@Neeraj Sabharwal

tcp 0 0 0.0.0.0:9995 0.0.0.0:* LISTEN 26340/java
tcp 0 0 10.87.139.168:9995 10.25.35.165:55024 ESTABLISHED 26340/java

Please see the attempt log below. Could it be because of this line?

Unknown/unsupported param List(--num-executors, 2)
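One thing that may be worth trying, sketched below: pass the executor count as a Spark property instead of the --num-executors flag the launcher is rejecting. spark.executor.instances is the standard property form of --num-executors on YARN; whether this Tech Preview build actually forwards -D properties from ZEPPELIN_JAVA_OPTS to the SparkContext is an assumption on my part.

# zeppelin-env.sh (sketch, not a confirmed fix)
# replace the rejected --num-executors flag with its property equivalent
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485 -Dspark.executor.instances=2"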
01-29-2016 01:40 PM
@Neeraj Sabharwal zeppelin-env.sh already contains export ZEPPELIN_PORT=9995. Container creation is failing.
01-29-2016 01:36 PM
1 Kudo
zeppelin-env.sh has been updated with the below configuration:

export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=9995
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485"

Just wondering whether Zeppelin is not able to pick up the Hadoop configuration properly and is hence failing.
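To rule out the configuration angle, a quick check (paths assume the standard HDP client-config layout) is that HADOOP_CONF_DIR actually contains the YARN and HDFS client configs Zeppelin needs:

ls /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/yarn-site.xml
# both files should exist, and yarn-site.xml must point at the active ResourceManager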
01-29-2016 01:33 PM
Spark has been installed from Ambari. However, Zeppelin was installed from the RPM package (Tech Preview, version 0.6.0).
01-29-2016 01:32 PM
Thanks @Neeraj Sabharwal I just figured out that it started running on port 4044. Looks like it's a container launch issue. When I log in to the RM UI and look at the attempt log, I see the issue below:

AM Container for appattempt_1453983048063_0058_000002 exited with exitCode: 1
Diagnostics: Exception from container-launch.
Container id: container_e08_1453983048063_0058_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:

Trying to understand why the container launch is failing. Any ideas please? Thanks
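In case it helps anyone debugging the same thing, the aggregated container logs usually show the real cause behind an exitCode 1; the application ID below is derived from the attempt ID above:

yarn logs -applicationId application_1453983048063_0058
# dumps stdout/stderr of the failed AM container, where the underlying exception appears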
01-29-2016 12:46 PM
@Neeraj Sabharwal Looking further at the Zeppelin logs below, my understanding is that when Zeppelin tries to create a SparkContext, Spark attempts to use port 4040 for the context's web UI. However, as that port is already in use on the machine, the SparkContext cannot be launched.

WARN [2016-01-29 12:34:27,375] ({pool-2-thread-3} AbstractLifeCycle.java[setFailed]:204) - FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.spark-project.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.spark-project.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.spark-project.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.spark-project.jetty.server.Server.doStart(Server.java:293)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$connect$1(JettyUtils.scala:228)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:238)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:238)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1991)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1982)
at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:238)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:117)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:448)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:448)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:448)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:339)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:149)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Any idea how to change Zeppelin to use another port instead of 4040? I believe Spark has a mechanism to try 4041, 4042, etc. when two shells are running on the same machine and compete for the same port. However, does Zeppelin do the same?
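A sketch of a possible workaround in zeppelin-env.sh: pin the driver UI to a free port and raise Spark's port-retry budget. spark.ui.port and spark.port.maxRetries are standard Spark properties (Spark itself probes successive ports up to the retry limit); whether this Tech Preview build forwards -D properties from ZEPPELIN_JAVA_OPTS to the SparkContext is an assumption on my part.

# zeppelin-env.sh (sketch)
# start the driver UI at 4050 and let Spark probe up to 16 ports above it
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485 -Dspark.ui.port=4050 -Dspark.port.maxRetries=16"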
01-29-2016 12:03 PM
@Neeraj Sabharwal
The cluster is up and running. I am able to run TeraSort jobs on YARN, and I can also run the SparkPi example from the command line:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10