
Failed to load metadata

Contributor

Hi,

 

For my table, I regenerate a single partition as follows (rough commands are sketched after the list):

 

1) remove the partition's data from HDFS

2) insert into the partition from my staging table

3) run INVALIDATE METADATA on the table

4) compute incremental stats on the partition
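
Roughly, the steps look like the following. The table name ABC comes from the error below; the partition column, values, paths, and staging table name are just placeholders, not my real names:

# 1) remove the old partition data directly in HDFS (shell; placeholder path)
hdfs dfs -rm -r /user/hive/warehouse/abc/part_col=2018-01-01

-- 2) repopulate the partition from the staging table (placeholder columns and staging table)
INSERT OVERWRITE default.ABC PARTITION (part_col='2018-01-01')
SELECT col1, col2 FROM default.ABC_staging WHERE part_col='2018-01-01';

-- 3) invalidate metadata on the table
INVALIDATE METADATA default.ABC;

-- 4) compute incremental stats on just that partition
COMPUTE INCREMENTAL STATS default.ABC PARTITION (part_col='2018-01-01');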

 

From time to time, I see this error on step 4:

 


ERROR: AnalysisException: Failed to load metadata for table: 'ABC'
CAUSED BY: TableLoadingException: Failed to load metadata for table: default.ABC. Running 'invalidate metadata default.ABC' may resolve this problem.
CAUSED BY: MetaException: Object with id "" is managed by a different persistence manager

 

 

Thanks

Shannon


2 REPLIES

Contributor

I sometimes see a similar error during step 2) as well. Do I need to run INVALIDATE METADATA after step 1?

Cloudera Employee

Per the documentation, you should use the REFRESH statement after modifying an Impala table in the way you have described (changing the underlying HDFS data directly). The REFRESH statement is documented at the following link:

 

https://www.cloudera.com/documentation/enterprise/5-10-x/topics/impala_refresh.html

 

You should execute a REFRESH after removing the files from HDFS. You do not need a REFRESH when you add a new partition through Impala itself; it is only needed when the change is made via Hive or directly in HDFS.
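
As a rough sketch only, using the same placeholder partition column, path, and staging table as in the original post (none of these come from the linked documentation), the workflow with REFRESH instead of INVALIDATE METADATA would look like:

# remove the old partition data directly in HDFS (shell; placeholder path)
hdfs dfs -rm -r /user/hive/warehouse/abc/part_col=2018-01-01

-- make Impala pick up the HDFS-level change before rewriting the partition
REFRESH default.ABC;

-- repopulate the partition through Impala (no further refresh needed for this step)
INSERT OVERWRITE default.ABC PARTITION (part_col='2018-01-01')
SELECT col1, col2 FROM default.ABC_staging WHERE part_col='2018-01-01';

-- recompute stats for the affected partition
COMPUTE INCREMENTAL STATS default.ABC PARTITION (part_col='2018-01-01');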