<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: second replica is not found while writing a simple file to HDFS in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369839#M240571</link>
    <description>&lt;P&gt;Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/maria_dev/read_write_hdfs_example.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is the exception from the client side, so the NameNode is clearly excluding the datanode because of some issue.&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/80393"&gt;@rki_&lt;/a&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Tue, 02 May 2023 11:43:03 GMT</pubDate>
    <dc:creator>iamlazycoder</dc:creator>
    <dc:date>2023-05-02T11:43:03Z</dc:date>
    <item>
      <title>second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369684#M240531</link>
      <description>&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;I am trying to load a simple file to HDP Hadoop cluster using HDFS client and I got the following exception.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/maria_dev/read_write_hdfs_example.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;I looked into the namenode logs and enabled debug log level for NetworkTopology and&amp;nbsp;BlockPlacementPolicy components. After enabling logs, I found that the data node 172.18.0.2:50010 is&amp;nbsp; being excluded and since I am running only one datanode, it is unable to find second replica.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:22,188 DEBUG net.NetworkTopology (NetworkTopology.java:chooseRandom(780)) - Choosing random from 1 available nodes on node /default-&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;rack, scope=/default-rack, excludedScope=null, excludeNodes=[] &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:22,188 DEBUG net.NetworkTopology (NetworkTopology.java:chooseRandom(796)) - chooseRandom returning 172.18.0.2:50010 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:22,189 INFO hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3866)) - BLOCK* allocate blk_1073743107_2310, replicas=172.18.0.2&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;:50010 
for /home/maria_dev/read_write_hdfs_example.txt &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:24,972 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:24,986 INFO destination.HDFSAuditDestination (HDFSAuditDestination.java:logJSON(179)) - Flushing HDFS audit. Event Size:2 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:25,060 INFO hdfs.StateChange (FSNamesystem.java:completeFile(3759)) - DIR* completeFile: /spark2-history/.e3751543-0a05-4c2b-af27-f&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;3a7c02666b2 is closed by DFSClient_NONMAPREDUCE_-282543677_1 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:27,974 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. 
&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:27,988 INFO provider.BaseAuditHandler (BaseAuditHandler.java:logStatus(310)) - Audit Status Log: name=hdfs.async.batch.hdfs, interv&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;al=01:00.313 minutes, events=23, succcessCount=23, totalEvents=878, totalSuccessCount=876, totalDeferredCount=2 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:27,988 INFO destination.HDFSAuditDestination (HDFSAuditDestination.java:logJSON(179)) - Flushing HDFS audit. Event Size:3 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:30,975 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:33,976 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. 
&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:35,073 INFO hdfs.StateChange (FSNamesystem.java:completeFile(3759)) - DIR* completeFile: /spark2-history/.9aa45e6b-dff7-4f6d-b25d-a&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;662930c6797 is closed by DFSClient_NONMAPREDUCE_-282543677_1 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:36,976 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:36,989 INFO destination.HDFSAuditDestination (HDFSAuditDestination.java:logJSON(179)) - Flushing HDFS audit. Event Size:3 &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:39,977 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:42,977 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1653)) - BLOCK* neededReplications = 0, pendingRepl&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;ications = 0. 
&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:43,251 DEBUG net.NetworkTopology (NetworkTopology.java:chooseRandom(780)) - Choosing random from 0 available nodes on node /default-&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;rack, scope=/default-rack, excludedScope=null, excludeNodes=[172.18.0.2:50010] &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:43,251 DEBUG net.NetworkTopology (NetworkTopology.java:chooseRandom(796)) - chooseRandom returning null &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;2023-04-28 06:00:43,251 DEBUG blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseLocalRack(547)) - Failed to choose from lo&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;cal rack (location = /default-rack); the second replica is not found, retry choosing ramdomly &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:701) &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:622) &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:529) &lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef 
bgAnsiDef"&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:489)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="scrollback"&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;Please help troubleshooting the issue further.&lt;/SPAN&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 28 Apr 2023 06:11:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369684#M240531</guid>
      <dc:creator>iamlazycoder</dc:creator>
      <dc:date>2023-04-28T06:11:16Z</dc:date>
    </item>
    <item>
      <title>Re: second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369740#M240545</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/102565"&gt;@iamlazycoder&lt;/a&gt;&amp;nbsp;As you have only a single Datanode, the block placement policy won't allow putting the second replica on the same Datanode.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can try putting the file with a single replica and check whether the write succeeds.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;hdfs dfs -Ddfs.replication=1 -put /path/to/&lt;SPAN class="hljs-built_in"&gt;local&lt;/SPAN&gt;/file /path/to/hdfs/dir&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Or you can change&amp;nbsp;dfs.replication&lt;SPAN&gt;&amp;nbsp;in hdfs-site.xml to 1 at the cluster level.&lt;/SPAN&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Apr 2023 18:53:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369740#M240545</guid>
      <dc:creator>rki_</dc:creator>
      <dc:date>2023-04-28T18:53:52Z</dc:date>
    </item>
    <item>
      <title>Re: second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369750#M240551</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/80393"&gt;@rki_&lt;/a&gt;&amp;nbsp;I already have the&amp;nbsp;&lt;SPAN&gt;dfs.replication property set to 1 in hdfs-site.xml. I can see from the logs that it first finds the datanode&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="ansiDef bgAnsiDef"&gt;172.18.0.2:50010 and tries to allocate a block for the write operation. Why, then, does it try to find a second replica when dfs.replication is 1?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 29 Apr 2023 06:31:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369750#M240551</guid>
      <dc:creator>iamlazycoder</dc:creator>
      <dc:date>2023-04-29T06:31:04Z</dc:date>
    </item>
    <item>
      <title>Re: second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369764#M240556</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/102565"&gt;@iamlazycoder&lt;/a&gt;&amp;nbsp;Have you tried putting the file with&amp;nbsp;-Ddfs.replication=1?&lt;/P&gt;</description>
      <pubDate>Mon, 01 May 2023 09:49:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369764#M240556</guid>
      <dc:creator>rki_</dc:creator>
      <dc:date>2023-05-01T09:49:34Z</dc:date>
    </item>
    <item>
      <title>Re: second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369833#M240569</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/80393"&gt;@rki_&lt;/a&gt;&amp;nbsp;Yes, I tried setting&amp;nbsp;&lt;SPAN&gt;dfs.replication=1 while writing the file to HDFS as well.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2023 10:30:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369833#M240569</guid>
      <dc:creator>iamlazycoder</dc:creator>
      <dc:date>2023-05-02T10:30:10Z</dc:date>
    </item>
    <item>
      <title>Re: second replica is not found while writing a simple file to HDFS</title>
      <link>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369839#M240571</link>
      <description>&lt;P&gt;Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/maria_dev/read_write_hdfs_example.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is the exception from the client side, so the NameNode is clearly excluding the datanode because of some issue.&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/80393"&gt;@rki_&lt;/a&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2023 11:43:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/m-p/369839#M240571</guid>
      <dc:creator>iamlazycoder</dc:creator>
      <dc:date>2023-05-02T11:43:03Z</dc:date>
    </item>
  </channel>
</rss>