With the help of the remarks by @Aaron Dossett, I found a solution to this.
Knowing that Storm does not mark the HDFS file it is currently writing to, and that .addRotationAction is not robust enough in extreme cases, I turned to a low-level solution.
HDFS can report the files on a path that are open for write:
hdfs fsck <storm_hdfs_state_output_path> -files -openforwrite
or, alternatively, you can list only the files on a path that are NOT open:
hdfs fsck <storm_hdfs_state_output_path> -files
The output is quite verbose, but you can filter out the closed/completed files with sed or awk.
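As a sketch of that filtering step, the helper below assumes fsck prints one line per file that starts with the file's absolute path and contains its size in bytes, and that files still being written are flagged with OPENFORWRITE; the exact output format can vary between Hadoop versions, so check it against your cluster before relying on this.

```shell
# Hypothetical helper: reduce fsck output to the paths of closed files only.
# Assumes one line per file starting with an absolute path and containing
# "bytes", with in-progress files marked OPENFORWRITE (version-dependent).
closed_files() {
  grep -v 'OPENFORWRITE' | awk '/^\// && /bytes/ {print $1}'
}

# Usage (commented out so the sketch runs without a cluster):
# hdfs fsck <storm_hdfs_state_output_path> -files -openforwrite | closed_files
```

Piping the fsck report through a function like this gives you a plain list of paths that are safe to hand to downstream consumers.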
(The Java HDFS API has similar hooks, for example DistributedFileSystem#isFileClosed; this is just the CLI-level solution.)