I'm not sure there is a single command to get the quotas for all directories. But I would get the list of HDFS directories and iterate through them with a shell script, appending the results to a file (or we could even just print them to the screen).
hadoop fs -ls -R / would get the list of directories and their subdirectories. Save it to a file, read it line by line with shell commands, and pass each line as a variable to hadoop fs -count -v -q $linefrompreviouscommand. This would work.
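A rough sketch of that loop, assuming the standard hadoop CLI is on the PATH (the output file name and the filtering helper are my own illustrative choices; `-ls -R` also lists plain files, so the script keeps only lines whose permission string starts with 'd'):

```shell
#!/bin/sh
# Keep only directory entries from a `hadoop fs -ls -R` listing:
# permission strings for directories start with 'd'; the path is the last field.
filter_dirs() {
    awk '/^d/ {print $NF}'
}

OUT=quota_report.txt   # illustrative output file name
: > "$OUT"             # truncate/create the report file

# Run on a cluster with the hadoop CLI available:
# hadoop fs -ls -R / | filter_dirs | while IFS= read -r dir; do
#     hadoop fs -count -q -v "$dir" >> "$OUT"
# done
```

The per-directory `hadoop fs -count -q -v` call is the part that dominates runtime, since it issues one round trip per directory.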
Yep, this could work, but on a big cluster I could imagine this being time-consuming. The initial recursive listing (especially since it goes all the way down to the file level) could be quite large for a file system of any size. The more time-consuming part would be running the "hdfs dfs -count" command over and over and over. But... like you said, this should work. Ideally, I'd want the NN to just offer a "show me all quota details" view, or at least "show me directories w/quotas". Since this function is not present, maybe there is a performance hit for the NN to determine this quickly that I'm not considering, as it seems lightweight to me. Thanks for your suggestion.
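As a small follow-up on the "show me directories w/quotas" wish: once the per-directory counts have been collected (one line per directory in the usual `-count -q` column layout of QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA, ...), directories with no quota set show "none" in the quota columns, so a simple post-filter can approximate that view. This is a hedged sketch; the helper name is mine, and it assumes the standard column order:

```shell
#!/bin/sh
# Keep only lines where a name quota (column 1) or a space quota (column 3)
# is actually set, i.e. not "none". Assumes `hadoop fs -count -q` output.
filter_quota_dirs() {
    awk '$1 != "none" || $3 != "none" {print}'
}

# Usage (on a cluster): pipe the collected report through the filter, e.g.
# cat quota_report.txt | filter_quota_dirs
```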