
"Failed to load metadata" exception raised for only one of the impala tables after hadoop namenode switchover event

Explorer

While querying one of the Impala tables (out of hundreds of tables under the same database), I got the following exception:

 

select count(*) from impala_table;
Query: select count(*) from impala_table
Query submitted at: 2020-10-23 03:12:42 (Coordinator: https://impalad:25000)
ERROR: AnalysisException: Failed to load metadata for table: 'impala_table'
CAUSED BY: TableLoadingException: Failed to load metadata for table: impala_db.impala_table
CAUSED BY: RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

 

 

This happened after the HDFS NameNode failed over multiple times (high-availability setup). The curious thing is that the error occurred only for this particular table, while the hundreds of other tables could still be queried without any issues.

 

"refresh impala_table;" command raised the exactly same exception as above. "invalidate metadata;" command was a valid workaround and all tables could be queried again after that.

 

We use a PostgreSQL database for the Hive metastore.

 

Questions:

1) Do you have any explanation for why this issue happened only for this particular table, given that all the tables use the same underlying Hive metastore and were queried from the same impalad?

2) Is there any reference to the standby NameNode stored in the Hive metastore that could still be routing the metadata load for a particular table to the standby? I know that the HDFS location for each table is stored in the "SDS" table of the Hive metastore in non-HA setups, but I haven't seen the same in our high-availability setup, and nothing that looks relevant seems to be stored in "SDS" there (see the query sketch after this list).

3) Is there a way in Impala to automatically invalidate the metadata for a table when it fails to load? I was expecting Impala to eventually recover from this on its own, but I had to run the invalidate metadata command manually. If not, what would be a good approach to detect this issue through monitoring?
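For question 2, this is roughly how I inspected what the metastore stores for the table, as a sketch against the standard Hive metastore schema on PostgreSQL (the database and table names are the placeholders used above; HMS uses quoted upper-case identifiers on PostgreSQL):

-- HDFS location stored for the table itself
SELECT d."NAME" AS db_name, t."TBL_NAME", s."LOCATION"
FROM "TBLS" t
JOIN "DBS" d ON d."DB_ID" = t."DB_ID"
JOIN "SDS" s ON s."SD_ID" = t."SD_ID"
WHERE d."NAME" = 'impala_db' AND t."TBL_NAME" = 'impala_table';

-- HDFS locations stored for its partitions (each has its own storage descriptor)
SELECT p."PART_NAME", s."LOCATION"
FROM "PARTITIONS" p
JOIN "TBLS" t ON t."TBL_ID" = p."TBL_ID"
JOIN "SDS" s ON s."SD_ID" = p."SD_ID"
WHERE t."TBL_NAME" = 'impala_table';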

 

Thanks a lot!

1 ACCEPTED SOLUTION

Expert Contributor

@PauloRC 

 

--> With regard to your point 1: when a table is queried for the first time, Impala pulls the metadata from the catalog server, which in turn gets it from HMS, and the impalad then stores that metadata in its local cache. The next time a query runs against the same table, the impalad does not have to reach out to the catalog server again for the metadata.

In your case, you were able to run queries on the other tables because their metadata was already cached locally on the impalad (the query profile shows whether metadata was pulled from the catalog server or served from the local cache). But for the table "impala_table", the metadata was not present locally, so the impalad tried to fetch it from the catalog server. Because of the failover, HMS was somehow still pointing to the standby NameNode, which is why you see the error below.

 

CAUSED BY: TableLoadingException: Failed to load metadata for table: impala_db.impala_table
CAUSED BY: RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
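A quick way to confirm which case applies is to run the query from impala-shell and print its profile; the exact wording depends on the Impala version, but the query timeline shows whether a metadata load from the catalog was needed (a sketch):

select count(*) from impala_db.impala_table;
profile;
-- in the "Query Timeline" section, look for entries such as
--   Metadata load started / Metadata load finished
-- a query served purely from the impalad's cached metadata skips that load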

 
--> For your point 2, I would like to check whether you have followed the steps in the document below:

 

 https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_hag_hdfs_ha_cdh_components_conf...
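In short, that document is about making the metastore locations use the HA nameservice. A sketch of the check and fix with the Hive metatool, assuming a nameservice called nameservice1 and an old single-NameNode address nn01.example.com:8020 (both placeholder values):

# list the filesystem roots currently stored in the Hive metastore
hive --service metatool -listFSRoot

# rewrite any locations still pointing at a single NameNode (dry run first)
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://nn01.example.com:8020 -dryRun
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://nn01.example.com:8020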

 

--> For your point 3, the solution is described in the link below:

 

https://docs.cloudera.com/documentation/enterprise/latest/topics/impala_metadata.html#auto_poll_hms_...
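If I recall the feature behind that link correctly (it requires a recent enough Impala release), it is enabled with a catalogd startup flag that makes the catalog poll HMS notification events and refresh/invalidate affected tables automatically, roughly:

# catalogd startup flag (sketch): poll HMS notification events every 2 seconds
--hms_event_polling_interval_s=2
# Hive must also be configured to generate the notification events
# (a DbNotificationListener in hive.metastore.transactional.event.listeners)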


2 REPLIES 2


Explorer

Hi @tusharkathpal ! Thanks for the detailed explanation, really appreciate it!

 

In my case, all tables were created beforehand, so their static metadata should already have been cached. However, clients periodically issue commands to create new partitions in the Impala tables (every hour), and refresh commands are also issued periodically on those new partitions (every minute) so that the Parquet files written into them become queryable in Impala.

I can confirm that only a handful of tables were being ingested into during the HDFS switchovers. Probably the partition creation on "impala_table", or a refresh command on one of its partitions, triggered a metadata fetch from the catalog server, which would explain why it happened only for "impala_table".
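For context, the hourly/minutely statements issued by the ingestion pipeline look roughly like this (the partition column and value are made-up placeholders):

-- hourly: register the new partition directory
alter table impala_db.impala_table add if not exists partition (event_hour='2020102303');

-- every minute: pick up Parquet files written directly into that partition's directory
refresh impala_db.impala_table partition (event_hour='2020102303');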

 

About the hive metatool command: it lists the correct HDFS locations. I don't think that fix applies in my case, because HDFS is deployed with the final nameservice in its configuration before Hadoop starts up (i.e., there is no upgrade from a non-HA to an HA setup involved).

 

About automatic invalidation of metadata, I will consider it for future Impala upgrades. It would help by handling the metadata change caused by the "alter table add partition" command. However, I would need to change part of the ingestion pipeline, because our use case of adding files directly on the filesystem is not supported by that feature.