Member since 11-22-2016 · 8 Posts · 0 Kudos Received · 0 Solutions
11-09-2017 03:45 PM
Thank you! I am using HDP-2.5.5.5. There is no -format option for hdfs oiv.
11-09-2017 03:03 PM
I collected statistics on the fsimage by running hdfs oiv with the FileDistribution processor. The output file contains content like the following. What are the units of the "Size" column? Bits? Bytes?

Size     NumFiles
0        90380
2097152  573886
Labels: Apache Hadoop
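Regarding the question above: the FileDistribution processor of hdfs oiv reports sizes in bytes, and the second bucket boundary in the sample output, 2097152, is exactly 2 MiB, which matches the processor's default bucket step. As a minimal sketch (not an official tool), here is how the two-column output could be parsed and summarized in Python; the `sample` string is the data quoted in the post:

```python
# Minimal sketch: parse the two-column "Size NumFiles" output produced by
# `hdfs oiv -p FileDistribution` and summarize it. The Size column is in
# bytes; the default bucket step is 2 MiB (2097152 bytes).

def parse_file_distribution(text):
    """Return a list of (bucket_upper_bound_bytes, num_files) tuples."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Keep only purely numeric two-column rows; skip the header line.
        if len(parts) == 2 and all(p.isdigit() for p in parts):
            rows.append((int(parts[0]), int(parts[1])))
    return rows

# Sample data taken verbatim from the post above.
sample = """\
Size NumFiles
0 90380
2097152 573886
"""

rows = parse_file_distribution(sample)
total_files = sum(n for _, n in rows)
print(rows)         # [(0, 90380), (2097152, 573886)]
print(total_files)  # 664266
```

Note the large first bucket (90,380 files of size 0): a high count of empty or tiny files is itself worth investigating on a busy cluster.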
10-27-2017 06:19 PM
Thank you! Yes, we have a lot of snapshots.
10-26-2017 07:41 PM
I don't think that is the case, since the fsimage size is 19 GB for only about 8,000,000 blocks, which is far too large. I have updated the question with more info.
10-26-2017 03:59 PM
Hi, I am running HDP-2.3.4.0 and Ambari 2.2.0.0. The NameNode heap size is set to 24 GB, of which 22 GB are used. This is very strange, since the total block count is only 7,796,546. The fsimage size is 19 GB. Here is the jmap output for the NameNode JVM:

-bash-4.2$ /usr/java/jdk1.7.0_67/bin/jmap -histo 35350 | head
 num     #instances         #bytes  class name
----------------------------------------------
   1:     175738748    14270581184  [B
   2:     193636970    13941861840  org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy
   3:     193636970    10843670320  org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff
   4:      15675953     2867785824  [Ljava.lang.Object;
   5:             4    1761607776  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
   6:       7798931      748697376  org.apache.hadoop.hdfs.server.namenode.INodeFile
   7:       7799784      499186176  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous

According to the best practices, a heap of about 9 or 10 GB should suffice for this block count. But that is not what we see. What could inflate the heap size, and how could I troubleshoot this?
Labels: Apache Hadoop
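The jmap histogram in the post above already points at the answer the thread converges on (snapshots): the top classes after byte arrays are INodeFileAttributes$SnapshotCopy and snapshot.FileDiff, not the block or inode classes that the usual heap-sizing guidance accounts for. A back-of-the-envelope check, using only the numbers quoted in the post, shows how dominant the snapshot bookkeeping is:

```python
# Back-of-the-envelope check using the jmap histogram from the post:
# how much heap is attributable to snapshot bookkeeping, and how many
# snapshot diffs exist per file on average?

GIB = 1024 ** 3

# (#instances, #bytes) pairs copied from the jmap output.
snapshot_copy = (193_636_970, 13_941_861_840)  # INodeFileAttributes$SnapshotCopy
file_diff     = (193_636_970, 10_843_670_320)  # snapshot.FileDiff
inode_file    = (7_798_931,      748_697_376)  # INodeFile

snapshot_bytes = snapshot_copy[1] + file_diff[1]
diffs_per_file = snapshot_copy[0] / inode_file[0]

print(f"snapshot objects: {snapshot_bytes / GIB:.1f} GiB")
print(f"avg snapshot diffs per file: {diffs_per_file:.1f}")
```

Roughly 23 GiB of the 24 GB heap is SnapshotCopy plus FileDiff objects, about 25 snapshot diffs per file, which is consistent with the later reply "we have a lot of snapshots" and explains why the heap far exceeds what block count alone would predict. Deleting stale snapshots would let that memory be reclaimed.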