<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Hive Table Inserts are failing after a certain number of inserts (Support Questions)</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Hive-Table-Inserts-are-failing-after-certain-number-of/m-p/235345#M197159</link>
    <description>&lt;P&gt;I have some Hive inserts that work absolutely fine on HDP 2.6.5 and Redhat 6.8, but the same inserts fail on HDP 2.6.5 and Redhat 7.6/7.5.&lt;/P&gt;&lt;P&gt;To reproduce it, create a table (preferably an ACID table); after inserting 100+ rows (sometimes 129 rows, sometimes 130 rows), the inserts start failing with a "could only be replicated to 0 nodes" error.&lt;/P&gt;&lt;P&gt;Surprisingly, if I start the same inserts again, after some time they again fail after the same 100-odd inserts. I tried tuning Ambari properties for Hive/Tez/YARN using the HDP guidelines, but that did not help much. The datanode replication error could be misleading, and when I tried inserting into non-transactional tables I got some other errors as well. All the backtraces are pasted below:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1) Backtrace 1: could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;INFO  : Dag name: INSERT INTO analyticsdb....15:17:50.417',46)(Stage-1)&lt;/P&gt;&lt;P&gt;ERROR : Failed to execute tez graph.&lt;/P&gt;&lt;P&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/hive/_tez_session_dir/e891a1be-8544-4bb4-a9f6-4a83a18094e2/.tez/application_1562919643063_0016/tez-conf.pb could only be replicated to 0 nodes instead of minReplication (=1).  
There are 1 datanode(s) running and no node(s) are excluded in this operation.&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2) Hive Runtime Error while closing operators&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;INFO  : Dag name: INSERT INTO analyticsdb....15:17:50.417',46)(Stage-1)&lt;/P&gt;&lt;P&gt;INFO  : Status: Running (Executing on YARN cluster with App id application_1562919643063_0011)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;--------------------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;--------------------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;Map 1                RUNNING      1          0        0        1       4       0&lt;/P&gt;&lt;P&gt;--------------------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;VERTICES: 00/01  [&amp;gt;&amp;gt;--------------------------] 0%    ELAPSED TIME: 11.19 s    
&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;--------------------------------------------------------------------------------&lt;/P&gt;&lt;P&gt;ERROR : Status: Failed&lt;/P&gt;&lt;P&gt;ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1562919643063_0011_200_00, diagnostics=[Task failed, taskId=task_1562919643063_0011_200_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime Error while closing operators&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)&lt;/P&gt;&lt;P&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)&lt;/P&gt;&lt;P&gt;at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)&lt;/P&gt;&lt;P&gt;at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)&lt;/P&gt;&lt;P&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;/P&gt;&lt;P&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)&lt;/P&gt;&lt;P&gt;at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)&lt;/P&gt;&lt;P&gt;at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)&lt;/P&gt;&lt;P&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)&lt;/P&gt;&lt;P&gt;at 
java.lang.Thread.run(Thread.java:745)&lt;/P&gt;&lt;P&gt;Caused by: java.lang.RuntimeException: Hive Runtime Error while closing operators&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.close(MapRecordProcessor.java:370)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:164)&lt;/P&gt;&lt;P&gt;... 14 more&lt;/P&gt;&lt;P&gt;Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hive/warehouse/analyticsdb.db/tbl_accessareareaders_cdc/.hive-staging_hive_2019-07-12_09-57-54_555_2284217577217095088-3/_task_tmp.-ext-10000/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3) Failed with exception Buffer underflow.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;INFO  : Dag name: INSERT INTO analyticsdb....15:17:50.417',46)(Stage-1)&lt;/P&gt;&lt;P&gt;INFO  : Loading data to table analyticsdb.tbl_accessareareaders_cdc from 
hdfs://ip-172-31-17-232.ec2.internal:8020/apps/hive/warehouse/analyticsdb.db/tbl_accessareareaders_cdc/.hive-staging_hive_2019-07-12_09-27-38_852_1168222145805851570-3/-ext-10000&lt;/P&gt;&lt;P&gt;INFO  : [Warning] could not update stats.Failed with exception Buffer underflow.&lt;/P&gt;&lt;P&gt;org.apache.hive.com.esotericsoftware.kryo.KryoException: Buffer underflow.&lt;/P&gt;&lt;P&gt;at org.apache.hive.com.esotericsoftware.kryo.io.Input.require(Input.java:181)&lt;/P&gt;&lt;P&gt;at org.apache.hive.com.esotericsoftware.kryo.io.Input.readVarInt(Input.java:355)&lt;/P&gt;&lt;P&gt;at org.apache.hive.com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:809)&lt;/P&gt;&lt;P&gt;at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:670)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.stats.fs.FSStatsAggregator.connect(FSStatsAggregator.java:66)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.StatsTask.createStatsAggregator(StatsTask.java:318)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.StatsTask.aggregateStats(StatsTask.java:149)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.StatsTask.execute(StatsTask.java:122)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:162)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1765)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1506)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1303)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1170)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1165)&lt;/P&gt;&lt;P&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)&lt;/P&gt;&lt;P&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)&lt;/P&gt;&lt;P&gt;at 
org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255)&lt;/P&gt;&lt;P&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;/P&gt;&lt;P&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)&lt;/P&gt;&lt;P&gt;at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:266)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;/P&gt;&lt;P&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)&lt;/P&gt;&lt;P&gt; &lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 12 Jul 2019 22:10:19 GMT</pubDate>
    <dc:creator>sujoy_neogi</dc:creator>
    <dc:date>2019-07-12T22:10:19Z</dc:date>
  </channel>
</rss>

