<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98296#M11773</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1013/s7ugas.html" nodeid="1013"&gt;@Gagan Singh&lt;/A&gt; are you still having this issue? Can you post your solution? Otherwise please accept the answer to close out the thread.&lt;/P&gt;</description>
    <pubDate>Tue, 02 Feb 2016 09:50:00 GMT</pubDate>
    <dc:creator>aervits</dc:creator>
    <dc:date>2016-02-02T09:50:00Z</dc:date>
    <item>
      <title>Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98293#M11770</link>
      <description>&lt;P&gt;Thrift server started as:
/usr/hdp/current/spark-thriftserver/sbin/start-thriftserver.sh --master yarn-client --executor-memory 20G --num-executors 20 --executor-cores 12 --hiveconf hive.server2.thrift.port=10001 &lt;/P&gt;&lt;P&gt;1. Cached data in Spark memory using
  CACHE TABLE bo_5years &lt;/P&gt;&lt;P&gt;
2. Ran SELECT * FROM bo_5years from beeline &lt;/P&gt;&lt;PRE&gt;Error in logs: 
15/12/04 16:03:54 WARN DefaultChannelPipeline: An exception was thrown by a user handler while handling an exception event ([id: 0x5e66418a, /10.105.167.206:53903 =&amp;gt; /10.105.164.205:60270] EXCEPTION: java.lang.OutOfMemoryError: Java heap 
space) 
java.lang.OutOfMemoryError: Java heap space 
        at java.lang.Object.clone(Native Method) 
        at akka.util.CompactByteString$.apply(ByteString.scala:410) 
        at akka.util.ByteString$.apply(ByteString.scala:22) 
        at akka.remote.transport.netty.TcpHandlers$class.onMessage(TcpSupport.scala:45) 
        at akka.remote.transport.netty.TcpServerHandler.onMessage(TcpSupport.scala:57) 
        at akka.remote.transport.netty.NettyServerHelpers$class.messageReceived(NettyHelpers.scala:43) 
        at akka.remote.transport.netty.ServerHandler.messageReceived(NettyTransport.scala:180) 
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310) 
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) 
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
        at java.lang.Thread.run(Thread.java:745) 
15/12/04 16:03:56 ERROR ErrorMonitor: Uncaught fatal error from thread [sparkDriver-akka.remote.default-remote-dispatcher-7] shutting down ActorSystem [sparkDriver] 
java.lang.OutOfMemoryError: Java heap space 
        at org.spark_project.protobuf.ByteString.copyFrom(ByteString.java:192) 
        at org.spark_project.protobuf.CodedInputStream.readBytes(CodedInputStream.java:324) 
        at akka.remote.WireFormats$SerializedMessage.&amp;lt;init&amp;gt;(WireFormats.java:3030) 
        at akka.remote.WireFormats$SerializedMessage.&amp;lt;init&amp;gt;(WireFormats.java:2980) 
        at akka.remote.WireFormats$SerializedMessage$1.parsePartialFrom(WireFormats.java:3073) 
        at akka.remote.WireFormats$SerializedMessage$1.parsePartialFrom(WireFormats.java:3068) 
        at org.spark_project.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) 
        at akka.remote.WireFormats$RemoteEnvelope.&amp;lt;init&amp;gt;(WireFormats.java:993) 
        at akka.remote.WireFormats$RemoteEnvelope.&amp;lt;init&amp;gt;(WireFormats.java:927) 
        at akka.remote.WireFormats$RemoteEnvelope$1.parsePartialFrom(WireFormats.java:1049) 
        at akka.remote.WireFormats$RemoteEnvelope$1.parsePartialFrom(WireFormats.java:1044) 
        at org.spark_project.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) 
        at akka.remote.WireFormats$AckAndEnvelopeContainer.&amp;lt;init&amp;gt;(WireFormats.java:241) 
        at akka.remote.WireFormats$AckAndEnvelopeContainer.&amp;lt;init&amp;gt;(WireFormats.java:175) 
        at akka.remote.WireFormats$AckAndEnvelopeContainer$1.parsePartialFrom(WireFormats.java:279) 
        at akka.remote.WireFormats$AckAndEnvelopeContainer$1.parsePartialFrom(WireFormats.java:274) 
        at org.spark_project.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141) 
        at org.spark_project.protobuf.AbstractParser.parseFrom(AbstractParser.java:176) 
        at org.spark_project.protobuf.AbstractParser.parseFrom(AbstractParser.java:188) 
        at org.spark_project.protobuf.AbstractParser.parseFrom(AbstractParser.java:193) 
        at org.spark_project.protobuf.AbstractParser.parseFrom(AbstractParser.java:49) 
        at akka.remote.WireFormats$AckAndEnvelopeContainer.parseFrom(WireFormats.java:409) 
        at akka.remote.transport.AkkaPduProtobufCodec$.decodeMessage(AkkaPduCodec.scala:181) 
        at akka.remote.EndpointReader.akka$remote$EndpointReader$$tryDecodeMessageAndAck(Endpoint.scala:995) 
        at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:928) 
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465) 
        at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415) 
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) 
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)&lt;/PRE&gt;</description>
      <pubDate>Thu, 10 Dec 2015 00:39:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98293#M11770</guid>
      <dc:creator>s7ugas</dc:creator>
      <dc:date>2015-12-10T00:39:10Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98294#M11771</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1013/s7ugas.html" nodeid="1013"&gt;@Gagan Singh&lt;/A&gt; You may want to take a look into this &lt;A href="http://stackoverflow.com/questions/21138751/spark-java-lang-outofmemoryerror-java-heap-space"&gt;http://stackoverflow.com/questions/21138751/spark-...&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 10 Dec 2015 00:46:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98294#M11771</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2015-12-10T00:46:28Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98295#M11772</link>
      <description>&lt;P&gt;Thanks for the post, Neeraj. We do cache data in the use case above, so a certain percentage of memory is needed for Spark persistence. We also tried executors with 40G of memory and still ran into the issue.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Dec 2015 03:11:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98295#M11772</guid>
      <dc:creator>s7ugas</dc:creator>
      <dc:date>2015-12-10T03:11:02Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98296#M11773</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1013/s7ugas.html" nodeid="1013"&gt;@Gagan Singh&lt;/A&gt; are you still having this issue? Can you post your solution? Otherwise please accept the answer to close out the thread.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Feb 2016 09:50:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98296#M11773</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2016-02-02T09:50:00Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Thrift Server goes down/runs into OOM after some time when running large jobs from Tableau or beeline</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98297#M11774</link>
      <description>&lt;P&gt;This was resolved by removing the --executor-cores argument passed when starting the Thrift server. Memory and the number of executors can be increased or decreased based on data volume.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Feb 2016 11:21:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Thrift-Server-goes-down-runs-into-OOM-after-some-time/m-p/98297#M11774</guid>
      <dc:creator>s7ugas</dc:creator>
      <dc:date>2016-02-02T11:21:27Z</dc:date>
    </item>
  </channel>
</rss>