Support Questions
Find answers, ask questions, and share your expertise

MSCK REPAIR TABLE hangs when the HDFS directory of the target table has more than a certain number of sub-directories

New Contributor


I recently installed HDP-2.5.3 and have an issue with MSCK REPAIR TABLE. I tried to migrate data from another cluster and created an external Hive table on it. The command completed successfully with up to 170 data directories, and it was even very quick, around 3 seconds. However, when I tried it with 190 or more data directories, it hung until I killed it after a few hours.

I looked at hivemetastore.log and found it didn't proceed any further after calling 'get_partitions'.

I tested it with far more directories (thousands) on another cluster running HDP-2.4.x, and it worked without any problem; there, 'get_partitions_with_auth' was called instead of 'get_partitions'. I compared the two clusters' configs one by one in Ambari but don't see any difference.

Does anyone have any ideas?


New Contributor

In my case, this does not help.


@PJ - Would it be possible to share the hive.log from when you observed this?

New Contributor

I have the same problem too. Migrating to HDP 2.5 was fatal for our heavy MSCK REPAIR TABLE workload (which we use to make a partitioned external table usable). hive.log is empty when it happens. Without any other solution, I'll have to write my own version of the tool... Sad to come to that end.
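For anyone in the same boat, a hand-rolled replacement can be fairly small: list the partition-style directory names under the table's HDFS location and turn them into ALTER TABLE ... ADD PARTITION statements, which avoids the metastore's bulk get_partitions call entirely. Below is a minimal sketch; the table name, partition layout, and directory names are illustrative assumptions, not from this thread.

```python
# Hypothetical sketch of a manual MSCK REPAIR TABLE replacement:
# convert partition-style directory names (e.g. "dt=2017-01-01" or
# "dt=2017-01-01/country=us") into ALTER TABLE ... ADD PARTITION
# statements that can be run against Hive one at a time or in batches.

def partition_spec(dirname):
    """Parse the 'key=value' segments of a partition path into a spec."""
    parts = []
    for segment in dirname.strip("/").split("/"):
        key, _, value = segment.partition("=")
        parts.append("%s='%s'" % (key, value))
    return ", ".join(parts)

def add_partition_statements(table, partition_dirs):
    """One IF NOT EXISTS statement per discovered partition directory."""
    return [
        "ALTER TABLE %s ADD IF NOT EXISTS PARTITION (%s);"
        % (table, partition_spec(d))
        for d in partition_dirs
    ]

if __name__ == "__main__":
    # Example directory names as they might appear under the table's
    # HDFS location (assumed layout, one level per partition key).
    dirs = ["dt=2017-01-01", "dt=2017-01-02/country=us"]
    for stmt in add_partition_statements("my_table", dirs):
        print(stmt)
```

IF NOT EXISTS makes the script safe to re-run, so you can repair incrementally instead of scanning everything in one metastore call.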