
MSCK REPAIR TABLE hangs when the target table's HDFS directory has more than a certain number of sub-directories

New Contributor

Hello,

I installed HDP-2.5.3 recently and have an issue with MSCK REPAIR TABLE. I migrated data from another cluster and created an external Hive table on top of it. The command completed successfully with up to 170 data directories, and it was very quick, around 3 seconds. However, when I tried it with 190 or more data directories, it hung until I killed it after a few hours.
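
To be concrete, the statements look roughly like this; the table name, schema, partition column, and path below are simplified stand-ins for the real ones:

    -- Hypothetical table; the real schema and location differ
    CREATE EXTERNAL TABLE my_table (
      id BIGINT,
      value STRING
    )
    PARTITIONED BY (dt STRING)
    LOCATION '/data/migrated/my_table';

    -- Discover the partition directories already present under LOCATION
    MSCK REPAIR TABLE my_table;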

I looked at hivemetastore.log and found that it made no further progress after calling 'get_partitions'.

I also tested with far more directories (thousands) on another cluster running HDP-2.4.x, and it worked without any problem; there, 'get_partitions_with_auth' was called instead of 'get_partitions'. I compared the two clusters' configs one by one in Ambari but don't see any difference.

Does anyone have any ideas?

1 ACCEPTED SOLUTION

New Contributor

We checked out the Hive code and reverted the patch that was causing this issue (the top two commits affecting org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java). The issue was resolved after that.
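
If rebuilding Hive is not an option, a possible session-level workaround, assuming the hang comes from the parallel directory listing those commits introduced, is to disable the checker's thread pool before running the repair. That hive.mv.files.thread sizes that pool is an assumption about this particular build, so verify it against your Hive version:

    -- Assumed workaround: make MSCK's directory scan single-threaded
    -- (that hive.mv.files.thread sizes the checker's pool is an assumption; verify for your build)
    SET hive.mv.files.thread=0;
    MSCK REPAIR TABLE my_table;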


12 REPLIES

New Contributor

In my case, it does not help.

Rising Star

@PJ - Would it be possible to share hive.log from when you observed this?

New Contributor

I have the same problem. Migrating to HDP 2.5 was fatal for our heavy MSCK REPAIR TABLE treatment (used to make a partitioned external table usable). hive.log is empty when it happens. Without any other solution, I'll have to recreate my own version of the tool... sad to come to that end.
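
In the meantime, one way to sidestep MSCK REPAIR TABLE entirely is to register the partitions explicitly; the table, partition values, and paths below are hypothetical:

    -- Add known partitions directly instead of relying on MSCK REPAIR TABLE
    ALTER TABLE my_table ADD IF NOT EXISTS
      PARTITION (dt='2017-01-01') LOCATION '/data/migrated/my_table/dt=2017-01-01'
      PARTITION (dt='2017-01-02') LOCATION '/data/migrated/my_table/dt=2017-01-02';

The partition list can be generated by listing the directories under the table's LOCATION, which is roughly what a hand-rolled replacement for MSCK would do.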