<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: I am getting an OutOfMemoryError while inserting data into a table; tried increasing the Java heap but it won't help in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119684#M30708</link>
    <description>&lt;P&gt;If you say that increasing the heap doesn't help, are we talking about decent sizes like 8GB+? Also, did you increase the Java opts AND the container size?&lt;/P&gt;&lt;P&gt;set hive.tez.java.opts="-Xmx3400m";&lt;/P&gt;&lt;P&gt;set hive.tez.container.size=4096;&lt;/P&gt;&lt;P&gt;If yes, then you most likely have a different problem, for example loading data into a partitioned table. ORC writers keep one buffer open for every output file, so an unsorted load into a heavily partitioned table holds many buffers in memory at once. There are ways around it, such as an optimized sorted load or the DISTRIBUTE BY keyword.&lt;/P&gt;&lt;P&gt;&lt;A href="http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data" target="_blank"&gt;http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If, however, the task uses significantly less than 4-8GB, then you should increase that first.&lt;/P&gt;</description>
    <pubDate>Mon, 06 Jun 2016 16:49:47 GMT</pubDate>
    <dc:creator>bleonhardi</dc:creator>
    <dc:date>2016-06-06T16:49:47Z</dc:date>
    <item>
      <title>I am getting an OutOfMemoryError while inserting data into a table; tried increasing the Java heap but it won't help</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119682#M30706</link>
      <description>&lt;P&gt;FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.&amp;lt;init&amp;gt;(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.hadoop.hive.ql.io.orc.OutStream.getNewInputBuffer(OutStream.java:107)
at org.apache.hadoop.hive.ql.io.orc.OutStream.spill(OutStream.java:223)
at org.apache.hadoop.hive.ql.io.orc.OutStream.flush(OutStream.java:239)
at org.apache.hadoop.hive.ql.io.orc.RunLengthByteWriter.flush(RunLengthByteWriter.java:58)
at org.apache.hadoop.hive.ql.io.orc.BitFieldWriter.flush(BitFieldWriter.java:44)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl$TreeWriter.writeStripe(WriterImpl.java:553)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StringTreeWriter.writeStripe(WriterImpl.java:1012)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.writeStripe(WriterImpl.java:1400)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushStripe(WriterImpl.java:1780)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:2040)
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.close(OrcOutputFormat.java:106)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:165)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:843)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)&lt;/P&gt;</description>
      <pubDate>Fri, 03 Jun 2016 23:24:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119682#M30706</guid>
      <dc:creator>navitkumar_sing</dc:creator>
      <dc:date>2016-06-03T23:24:18Z</dc:date>
    </item>
    <item>
      <title>Re: I am getting an OutOfMemoryError while inserting data into a table; tried increasing the Java heap but it won't help</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119683#M30707</link>
      <description>&lt;P&gt;This is due to the memory required by the ORC writer while writing ORC files. You can limit the memory use by tweaking the value of orc.compress.size, which is 256KB by default. I am not sure about your heap size; start testing with an 8KB buffer using&lt;/P&gt;&lt;P&gt;alter table table_name set tblproperties("orc.compress.size"="8192");&lt;/P&gt;&lt;P&gt;and see if it helps.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Jun 2016 23:34:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119683#M30707</guid>
      <dc:creator>rajkumar_singh</dc:creator>
      <dc:date>2016-06-03T23:34:14Z</dc:date>
    </item>
    <item>
      <title>Re: I am getting an OutOfMemoryError while inserting data into a table; tried increasing the Java heap but it won't help</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119684#M30708</link>
      <description>&lt;P&gt;If you say that increasing the heap doesn't help, are we talking about decent sizes like 8GB+? Also, did you increase the Java opts AND the container size?&lt;/P&gt;&lt;P&gt;set hive.tez.java.opts="-Xmx3400m";&lt;/P&gt;&lt;P&gt;set hive.tez.container.size=4096;&lt;/P&gt;&lt;P&gt;If yes, then you most likely have a different problem, for example loading data into a partitioned table. ORC writers keep one buffer open for every output file, so an unsorted load into a heavily partitioned table holds many buffers in memory at once. There are ways around it, such as an optimized sorted load or the DISTRIBUTE BY keyword.&lt;/P&gt;&lt;P&gt;&lt;A href="http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data" target="_blank"&gt;http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If, however, the task uses significantly less than 4-8GB, then you should increase that first.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Jun 2016 16:49:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/I-am-getting-outofmemory-while-inserting-the-data-into-table/m-p/119684#M30708</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-06-06T16:49:47Z</dc:date>
    </item>
  </channel>
</rss>