Member since: 12-21-2015
Posts: 43
Kudos Received: 10
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3368 | 12-08-2016 12:33 AM |
| | 6461 | 01-29-2016 08:44 PM |
| | 4282 | 01-28-2016 10:48 PM |
04-21-2017
03:27 PM
I could reproduce the same issue on the HDP 2.5 sandbox. It looks like a bug to me.
04-07-2017
07:52 PM
I added: export ZEPPELIN_MEM="-Xms1024m -Xmx2048m -XX:MaxMetaspaceSize=512m". The out-of-memory issue was fixed; however, I got a new issue related to the YARN configuration, which I will post as a new question. Thanks!
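For anyone hitting the same thing, here is a minimal sketch of where this can go in Ambari; the exact config section name (Advanced zeppelin-env / zeppelin_env_content) is an assumption and may differ by HDP version:

# Ambari > Zeppelin Notebook > Configs > Advanced zeppelin-env (zeppelin_env_content) -- location is an assumption
# JDK 8 replaces -XX:MaxPermSize with -XX:MaxMetaspaceSize
export ZEPPELIN_MEM="-Xms1024m -Xmx2048m -XX:MaxMetaspaceSize=512m"

After saving, Zeppelin needs a restart from Ambari for the new JVM options to take effect.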
04-07-2017
06:25 PM
Yes, that's right. On Ambari, should we add ZEPPELIN_MEM, let's say -XmxXXXXm, to Custom zeppelin-env?
04-07-2017
06:16 PM
The log file says:

INFO [2017-04-07 11:53:58,682] ({pool-2-thread-4} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1491580438681 started by scheduler org.apache.zeppelin.spark.SparkInterpreter283573820
INFO [2017-04-07 11:54:02,145] ({pool-2-thread-4} Logging.scala[logInfo]:58) - Changing view acls to: zeppelin
INFO [2017-04-07 11:54:02,145] ({pool-2-thread-4} Logging.scala[logInfo]:58) - Changing modify acls to: zeppelin
INFO [2017-04-07 11:54:02,145] ({pool-2-thread-4} Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zeppelin); users with modify permissions: Set(zeppelin)
INFO [2017-04-07 11:54:05,322] ({pool-2-thread-4} Logging.scala[logInfo]:58) - Starting HTTP Server
INFO [2017-04-07 11:54:05,945] ({pool-2-thread-4} Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
INFO [2017-04-07 11:54:08,337] ({pool-2-thread-4} AbstractConnector.java[doStart]:338) - Started SocketConnector@0.0.0.0:22900
INFO [2017-04-07 11:54:08,338] ({pool-2-thread-4} Logging.scala[logInfo]:58) - Successfully started service 'HTTP class server' on port 22900.
ERROR [2017-04-07 11:54:10,658] ({pool-2-thread-4} Job.java[run]:189) - Job failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
INFO [2017-04-07 11:54:10,658] ({pool-2-thread-4} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1491580438681 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter283573820
INFO [2017-04-07 11:54:11,954] ({pool-1-thread-18} Logging.scala[logInfo]:58) - Changing view acls to: zeppelin
INFO [2017-04-07 11:54:11,954] ({pool-1-thread-18} Logging.scala[logInfo]:58) - Changing modify acls to: zeppelin
INFO [2017-04-07 11:54:11,955] ({pool-1-thread-18} Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zeppelin); users with modify permissions: Set(zeppelin)

The "xxx.out" file says:

Exception in thread "qtp686649452-427" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
at org.spark-project.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:342)
at org.spark-project.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:526)
at org.spark-project.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:44)
at org.spark-project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "pool-1-thread-4"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-4"
Exception in thread "pool-1-thread-7" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-6" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-9" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp1834748429-966" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-8" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-13" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-12" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-11" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-3" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp184565796-1023" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp1834748429-962" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp851486649-779" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp1459091505-28" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp1613009598-14753" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "pool-1-thread-18" java.lang.OutOfMemoryError: GC overhead limit exceeded
04-07-2017
04:40 PM
We could install Zeppelin through Ambari, and it runs with a green light. However, when I open a notebook and try %sql (I guess the Spark interpreter), I get an error. When I check the Zeppelin log file, I see the out-of-memory error together with -XX:MaxPermSize=512m. I know this parameter only exists up to JDK 7; JDK 8 uses a different parameter. I'm certain we need to use ZEPPELIN_MEM to set the new parameter, but I don't know how to do that correctly through Ambari. Could anyone help with this?
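For context, a short sketch of the flag change being described (standard JVM behavior, not specific to this cluster):

# JDK 7 permanent-generation flags (removed in JDK 8):
#   -XX:PermSize=...        -XX:MaxPermSize=512m
# JDK 8 metaspace replacements:
#   -XX:MetaspaceSize=...   -XX:MaxMetaspaceSize=512m

So the goal is to pass the metaspace flag (plus a larger heap) through ZEPPELIN_MEM instead of relying on the old MaxPermSize value.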
Labels:
- Apache Zeppelin
03-02-2017
08:35 PM
1 Kudo
Thank you for the answer; however, neither of those is for LevelDB, which is used by the NodeManager. Do you have any idea how to initialize LevelDB? I have tried to find out, but I can't find any good article.
02-11-2017
02:42 AM
Yes, I use Ambari. I have one more question to clear my head. Assume I have nodes with at least 20 GB of memory and 4 cores each, and each node has a NodeManager installed. If I add one old computer with 5 GB of memory and 2 cores and install a NodeManager on it, should YARN's per-node memory allocation be reduced to at most 5 GB? That would be because the YARN configuration applies to all NodeManagers. I am asking because in my cluster, Ambari's YARN memory setting, especially the per-node maximum, is the lowest value among all nodes that have a NodeManager installed. In other words, if we want to utilize resources well, should we add nodes with similar memory sizes and core counts? Thank you,
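For illustration only, a sketch of the per-NodeManager properties involved; the values are made up, and giving different hosts different values assumes Ambari config groups are used:

# yarn-site.xml, config group for the 20 GB / 4-core nodes (illustrative values):
yarn.nodemanager.resource.memory-mb = 16384
yarn.nodemanager.resource.cpu-vcores = 4

# yarn-site.xml, separate config group for the 5 GB / 2-core node (illustrative values):
yarn.nodemanager.resource.memory-mb = 4096
yarn.nodemanager.resource.cpu-vcores = 2

With per-host config groups, the small node does not force the larger nodes down to its size; with a single shared value, that value does have to fit the smallest NodeManager.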
02-10-2017
11:24 PM
I understand that YARN and HDFS are configured separately. One thing that confuses me is that the YARN ResourceManager UI shows a maximum memory size and maximum vcores which, I guess, are not actual figures from the DataNode servers but come from the YARN configuration? If YARN does not have the actual hardware information, how can we know accurate information about current processing? I also wonder how the ResourceManager can allocate resources correctly. (Sorry about all the questions...) Thank you,
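As a side note, one way to check what each NodeManager is actually advertising to the ResourceManager is the yarn CLI; a sketch, with output fields varying by version:

yarn node -list -all
yarn node -status <node-id>   # reports the node's configured memory and vcore capacity

Those figures come from the per-node yarn-site configuration, not from probing the hardware.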
02-10-2017
10:29 PM
When we need more resources, that typically means adding more DataNode servers, right? This makes sense to me. What I'm not so clear on is this: without adding anything to my example five-server setup, what does it mean to install a NodeManager on all five servers? Even if we install more NodeManagers, it does not mean we increase the resources, because the number of DataNode servers stays at three. On the other hand, if we install a NodeManager on all five servers, might the YARN configuration give us wrong resource information? Thank you,
02-10-2017
09:08 PM
I'm not so clear on the Hadoop setup. I understand how it works, but when I look at the YARN configuration, one question comes up. Let's say we have five servers and three of the five are used as DataNode servers. In this case, how many NodeManagers should be installed and running? Should we have a NodeManager on all five servers, or only on the three DataNode servers?
Labels:
- Apache Hadoop
- Apache YARN