One of the disks on one of my data nodes was failing, so I replaced it following these steps:
1. Stop all services on the datanode.
2. Shut down the machine.
3. Replace the disk.
4. Power on the machine.
5. Mount the disk on its original mount point.
6. Start all HDFS services on the datanode.
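For reference, steps 5 and 6 on the node itself might look like the following. The device, mount point, and owner are placeholders, not from my cluster; the mount point must match one of the directories in dfs.datanode.data.dir:

```shell
DISK=/dev/sdd1                    # placeholder: the new disk's partition
MOUNT=/grid/3/hadoop/hdfs/data    # placeholder: must match dfs.datanode.data.dir

mount "$DISK" "$MOUNT"            # step 5: remount on the same path as before
chown -R hdfs:hadoop "$MOUNT"     # the DataNode process must own the data dir
df -h "$MOUNT"                    # sanity-check the mount before restarting services
```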
Now I get a "Pending Deletion Blocks" alert in Ambari. Did I do something wrong? Can I revert it?
This is a strange problem. I would check the following things:
1. Has anyone done a massive delete operation?
2. Did you mount a disk that already contained data, that is, a set of data blocks pulled out of some other machine/cluster?
3. Please keep the cluster in safe mode until you puzzle out what happened.
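If it helps, you can inspect the state directly rather than relying on the Ambari alert. Assuming shell access as the hdfs superuser on (or against) the active NameNode, these are standard hdfs dfsadmin subcommands:

```shell
# Check the current safe mode state (point 3 above suggests staying in safe mode)
hdfs dfsadmin -safemode get

# Enter safe mode while investigating; note this blocks all writes and deletes
hdfs dfsadmin -safemode enter

# Dump NameNode metadata, including the blocks waiting for deletion,
# into pending.txt under the NameNode's log directory
hdfs dfsadmin -metasave pending.txt
```

Remember to leave safe mode (`hdfs dfsadmin -safemode leave`) once you understand the cause, or the pending deletions will never drain.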
I wanted to know how this got resolved. By itself? Actually, I dropped a huge table yesterday, and this morning I made some config changes and restarted services. After the restart I got an alert on Ambari saying HDFS block deletion is pending. It's been more than 30 minutes and the alert is still showing on one of the NameNodes in the HA pair.
<property>
  <name>dfs.block.invalidate.limit</name>
  <value>50000</value>
</property>
The default value is 1000, which is too slow for large deletes.
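A rough sketch of why the default feels slow: the NameNode hands each DataNode at most dfs.block.invalidate.limit block invalidations per heartbeat (3 seconds by default), so drain time scales with pending blocks divided by (limit x number of DataNodes). The figures below (10 million pending blocks, 10 DataNodes) are made-up examples, not measurements:

```shell
# Rough lower bound on drain time: heartbeats needed x heartbeat interval (seconds).
pending=10000000   # hypothetical pending-deletion block count
nodes=10           # hypothetical DataNode count
heartbeat=3        # default dfs.heartbeat.interval, in seconds

limit=1000         # default dfs.block.invalidate.limit
echo $(( (pending + limit*nodes - 1) / (limit*nodes) * heartbeat ))   # 3000 s (~50 min)

limit=50000        # the raised value suggested above
echo $(( (pending + limit*nodes - 1) / (limit*nodes) * heartbeat ))   # 60 s
```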
Maybe you should also raise the maximum IPC message length, if you see exceptions about the block report exceeding it:
<property>
  <name>ipc.maximum.data.length</name>
  <value>1073741824</value>
</property>