07-06-2021
10:26 AM
@diplompils A false result from the recoverLease command does not necessarily mean the file is lost. Normally a file cannot disappear while its lease is held unless it was explicitly deleted with an rm command. You can retry the recovery with more attempts: hdfs debug recoverLease -path <file> -retries 10. Or check https://issues.apache.org/jira/browse/HDFS-8576 for a known issue with similar symptoms.
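If it helps to script the retry, here is a minimal sketch. The path is just a placeholder, and HDFS_CMD is an illustrative knob I've added so the commands can be dry-run with echo; on a real cluster set HDFS_CMD=hdfs (and run as a user with access to the file).

```shell
#!/bin/sh
# Minimal sketch: retry lease recovery on a stuck file.
# HDFS_CMD=echo dry-runs (prints the command); set HDFS_CMD=hdfs to execute.
HDFS_CMD="${HDFS_CMD:-echo}"

recover_lease() {
  # -retries makes the NameNode re-attempt recovery several times,
  # pausing between attempts, instead of giving up after one try.
  "$HDFS_CMD" debug recoverLease -path "$1" -retries 10
}

out=$(recover_lease /tmp/example-stuck-file)
echo "$out"
```

If the command still returns false after the retries, the JIRA above is worth reading before assuming data loss.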
03-11-2019
07:21 PM
The concerning bit is this part of the message:

> The reported blocks 1 needs additional 1393 blocks to reach the threshold 0.9990 of total blocks 1396.

This indicates that while your DataNodes have come up and begun reporting in, they are not finding any of their locally stored block files to include in those reports. The NameNode waits for enough (99.9%) of the block data to be available to users before it opens itself for full access, but it's stuck in a never-ending loop because no DataNodes are reporting availability of those blocks.

The overall number of blocks seems low; is this a test/demo setup? If yes, was the block data on the DataNodes ever wiped or removed as part of the upgrade/install attempts? Or were all DataNodes replaced with new ones at some point in the test?

If the data is not of concern at this stage (and ONLY if so), you can force your NameNode out of safemode manually with the 'hdfs dfsadmin -safemode leave' command (run as the 'hdfs' user or any granted HDFS superuser).

If you'd like to investigate the blocks' disappearance further, check the DataNode logs on the nodes where those blocks resided in the past.
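Putting the steps above in order, here is a hedged sketch of the check-then-leave sequence. HDFS_CMD=echo is a dry-run knob added for illustration only; on a real cluster use the actual hdfs binary, as the 'hdfs' user or another HDFS superuser, and only run the last command if the data really is expendable.

```shell
#!/bin/sh
# Dry-run sketch of the safemode investigation; set HDFS_CMD=hdfs to run for real.
HDFS_CMD="${HDFS_CMD:-echo}"

"$HDFS_CMD" dfsadmin -safemode get   # confirm the NameNode is still in safemode
"$HDFS_CMD" dfsadmin -report         # see how many DataNodes have reported in

# ONLY if the data is expendable: force the NameNode out of safemode.
leave=$("$HDFS_CMD" dfsadmin -safemode leave)
echo "$leave"
```

Leaving safemode does not recreate the missing blocks; files whose blocks never report in will simply read as corrupt afterwards, which is why the DataNode logs are the better first stop if the data matters.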