<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDFS command hdfs dfs -ls throws fatal internal error java.lang.ArrayIndexOutOfBoundsException: in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDFS-command-hdfs-dfs-ls-throws-fatal-internal-error-java/m-p/60970#M55619</link>
    <description>&lt;P&gt;I reproduced the error by intentionally corrupting the _index file.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If by "restore" you meant unarchiving the har file with the hdfs dfs -cp command, I found that it returns the same AIOOBE, so you won't be able to unarchive it either.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your best bet is to download the _index file, manually repair it, replace the original with the repaired copy, and see how it goes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Meanwhile, I filed an Apache jira&amp;nbsp;&lt;A href="https://issues.apache.org/jira/browse/HADOOP-14950" target="_blank" rel="13109657"&gt;HADOOP-14950&lt;/A&gt;&amp;nbsp;to handle the AIOOBE more gracefully, but it won't help fix your corrupted _index file.&lt;/P&gt;</description>
    <pubDate>Mon, 16 Oct 2017 17:18:31 GMT</pubDate>
    <dc:creator>weichiu</dc:creator>
    <dc:date>2017-10-16T17:18:31Z</dc:date>
  </channel>
</rss>

