Member since: 10-21-2015
Posts: 59
Kudos Received: 31
Solutions: 16
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3095 | 03-09-2018 06:33 PM
 | 2786 | 02-05-2018 06:52 PM
 | 13975 | 02-05-2018 06:41 PM
 | 4396 | 11-30-2017 06:46 PM
 | 1666 | 11-22-2017 06:20 PM
07-29-2021
12:21 AM
I just ran into this myself. You should raise this parameter in hdfs-site.xml:

<property>
  <name>dfs.block.invalidate.limit</name>
  <value>50000</value>
</property>

The default value is 1000, which makes block invalidation too slow. If you also see an exception about the report size, increase the maximum IPC message length:

<property>
  <name>ipc.maximum.data.length</name>
  <value>1073741824</value>
</property>
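As a quick sanity check on the values above (a sketch, not Hadoop code): ipc.maximum.data.length is specified in bytes, and 1073741824 is exactly 1 GiB.

```python
# The value set above for ipc.maximum.data.length is exactly 1 GiB.
ONE_GIB = 1024 ** 3
assert ONE_GIB == 1073741824

# For comparison, Hadoop's default for this setting is 64 MiB.
DEFAULT_IPC_MAX = 64 * 1024 ** 2
assert DEFAULT_IPC_MAX == 67108864
```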
02-26-2021
02:15 AM
Agreed, but is there a way to avoid this waste other than migrating the data out to the local filesystem and then back to HDFS? Example: we have a 500 MB file with a 128 MB block size, i.e. 4 blocks on HDFS. Now that we have changed the block size to 256 MB, how would we make the file on HDFS have 2 blocks of 256 MB instead of 4? Please suggest.
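The block arithmetic in the question can be checked with a short sketch (function name is illustrative, not an HDFS API): a file's block count is its size divided by the block size, rounded up.

```python
import math

def hdfs_block_count(file_size_bytes: int, block_size_bytes: int) -> int:
    """Number of HDFS blocks a file occupies: ceil(size / block size)."""
    return math.ceil(file_size_bytes / block_size_bytes)

MB = 1024 ** 2
print(hdfs_block_count(500 * MB, 128 * MB))  # 4 blocks at a 128 MB block size
print(hdfs_block_count(500 * MB, 256 * MB))  # 2 blocks at a 256 MB block size
```

Since the block size is a per-file, write-time property, only rewriting the file changes its layout; the existing 4-block file stays as written until it is copied again under the new block size.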
01-06-2020
08:57 AM
@aengineer Also noticed from one comment in HDFS-8789 that "the balancer doesn't support anything other than the default placement policy (BlockPlacementPolicyDefault)." HDFS-14053 says the ability for the NameNode to re-replicate blocks after a policy change was fixed in Hadoop 3.3.0 [not sure whether that refers to the Hadoop version, though a NameNode version doesn't make sense], while HDFS-14637 supports the statement above until UD is enabled.
12-06-2017
10:04 AM
@Michael Bronson A brief note on edits_inprogress_<start transaction ID>: this is the current edit log in progress. All transactions starting from that transaction ID are in this file, and all new incoming transactions get appended to it. HDFS pre-allocates space in this file in 1 MB chunks for efficiency and then fills it with incoming transactions. You'll probably see this file's size as a multiple of 1 MB. When HDFS finalizes the log segment, it truncates the unused portion of the space that doesn't contain any transactions, so the finalized file shrinks down. More details about these files and their functionality can be found at: https://hortonworks.com/blog/hdfs-metadata-directories-explained/
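The pre-allocate-then-truncate behaviour described above can be sketched as a small size-only simulation (the function names are illustrative, not HDFS internals):

```python
CHUNK = 1024 ** 2  # HDFS pre-allocates the in-progress edit log in 1 MB chunks

def preallocated_size(bytes_written: int) -> int:
    """On-disk size while the segment is in progress: rounded up to a 1 MB multiple."""
    chunks = max(1, -(-bytes_written // CHUNK))  # ceiling division, at least one chunk
    return chunks * CHUNK

def finalized_size(bytes_written: int) -> int:
    """On finalize, the unused pre-allocated tail is truncated away."""
    return bytes_written

written = 300_000  # ~300 KB of transactions logged so far
print(preallocated_size(written))  # 1048576: a 1 MB multiple, as seen on disk
print(finalized_size(written))     # 300000: shrinks to the actual data on finalize
```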
12-22-2016
02:47 PM
I am not sure about this error at the moment, but check https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo