We have 3 data nodes in a production cluster, and the cluster capacity is 3 TB. Whenever I try to run a TeraSort job on 500 GB of data, one of the NodeManagers stops working.
Pro: cheap. Con: not scalable. I know... you already knew that! ;-) Also, I'm not trying to be a smart @$$, but I do mean what I say at https://twitter.com/LesterMartinATL/status/504004795557236736 about these tiny clusters (yes, 3 worker nodes is a tiny cluster).
A 3 node cluster might make sense for some problems, but you'll never be able to do anything at scale on it, and it surely won't perform well on anything near the edge of what it can process.
Regarding your 3TB capacity: does that mean each node has 3TB dedicated to itself (9TB raw, but effectively 3TB with a replication factor of 3), or that each box has 1TB? I'm asking because for things like TeraSort we also have to consider the job's intermediate data (i.e. the info moving from the mappers to the reducers, which in this case will be roughly 500GB itself) as well as the final output written back to HDFS; yes, another 500GB. The intermediate data isn't stored on HDFS, but if the input and output are, then that 500GB is now 1TB (in + out) and will really be 3TB all by itself if both of these are stored with a replication factor of 3. Even with a replication factor of 1, this all smells problematic to me on this small cluster.
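As a back-of-the-envelope sketch of that footprint (assuming the default replication factor of 3 on both the TeraSort input and output; the numbers are just the ones from your question):

```shell
# Rough disk footprint of a 500GB TeraSort, in GB.
# The shuffle (intermediate) data lands on local disk, not HDFS,
# so it is counted once; HDFS data is multiplied by replication.
INPUT_GB=500
OUTPUT_GB=500
SHUFFLE_GB=500    # mapper-to-reducer data, roughly the input size
REPLICATION=3     # HDFS default; check dfs.replication on your cluster

HDFS_GB=$(( (INPUT_GB + OUTPUT_GB) * REPLICATION ))
TOTAL_GB=$(( HDFS_GB + SHUFFLE_GB ))
echo "HDFS footprint: ${HDFS_GB} GB"    # 3000 GB on HDFS alone
echo "Total on disk:  ${TOTAL_GB} GB"   # 3500 GB including shuffle
```

That 3.5TB is already more than your stated 3TB of cluster capacity, which by itself could explain a NodeManager falling over mid-job.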
If you only have 1TB of disk space on each node, then surely this will never run, as just mentioned. Even if space weren't an issue, you'd need to run something like 3,900 mappers just to read the input (if my math of dividing 500GB by a 128MB block size is right), plus a shed-load of reducers, and that would take forever on three nodes.
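The mapper math, as a quick sanity check (the 128MB block size is an assumption; verify dfs.blocksize on your cluster, as some distros default to 256MB):

```shell
# MapReduce launches roughly one map task per HDFS block of input.
INPUT_MB=500000   # 500GB in decimal MB
BLOCK_MB=128      # common dfs.blocksize default; check yours
MAPPERS=$(( INPUT_MB / BLOCK_MB ))
echo "${MAPPERS} map tasks"   # about 3,900 mappers
```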
It has been many years since I regularly ran TeraSort, but a very old heuristic of mine was a maximum of 30 minutes on a 10-worker-node cluster of boxes that each had 128GB of RAM and 10-12 disks.
Clearly there are many variables at hand, and I'm not sure looking at your specific output would immediately shed light on the exact problem. What I would recommend is to start small and scale up: run a 500MB gen and sort, then double that to 1GB, then 2GB, 4GB, and so on, checking that each run's results make sense compared with the previous one; I believe you'll see a good pattern. Eventually this will all run out of horsepower (aka nodes), but it will give you a better benchmark.
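A sketch of that ramp-up using the stock examples jar (the jar path and the /bench HDFS directories are placeholders; adjust them for your distro). TeraGen takes a row count, and each row is 100 bytes, so 500MB is 5,000,000 rows:

```shell
# Benchmark ramp-up: double the dataset each round (500MB, 1GB, 2GB, 4GB).
# Placeholder jar path; point this at your distro's examples jar.
EXAMPLES_JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar

# Dry-run guard: if hadoop isn't on the PATH, just print the commands.
if command -v hadoop >/dev/null 2>&1; then RUN=""; else RUN="echo"; fi

for SIZE_MB in 500 1000 2000 4000; do
  ROWS=$(( SIZE_MB * 1000 * 1000 / 100 ))   # 100 bytes per TeraGen row
  echo "=== ${SIZE_MB}MB run: ${ROWS} rows ==="
  $RUN hadoop jar $EXAMPLES_JAR teragen      $ROWS /bench/in-${SIZE_MB}mb
  $RUN hadoop jar $EXAMPLES_JAR terasort     /bench/in-${SIZE_MB}mb  /bench/out-${SIZE_MB}mb
  $RUN hadoop jar $EXAMPLES_JAR teravalidate /bench/out-${SIZE_MB}mb /bench/val-${SIZE_MB}mb
done
```

Time each round (the job counters report elapsed time) and compare runs; TeraValidate confirms the sort actually produced correct output before you trust the number.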
That is, the last good-sized run will give you a number that should be cut roughly in half when you double the number of worker nodes! Good luck and happy Hadooping!