
Cannot ALTER or DROP a big partitioned table - CAUSED BY: MetaException: Timeout when executing...

Master Collaborator

Hi,

I have a partitioned table with more than 50k partitions. It works well except for Hive Metastore operations such as DROP and ALTER ... RENAME, where I face this error message:

Query: drop table cars
ERROR: ImpalaRuntimeException: Error making 'dropTable' RPC to Hive Metastore: 
CAUSED BY: MetaException: Timeout when executing method: drop_table_with_environment_context; 600003ms exceeds 600000ms

I don't know whether it's a memory problem, or whether this is normal and I should adjust a timeout value. If so, which one?

Thanks in advance for your help.

3 ACCEPTED SOLUTIONS

Super Guru
Hi,

This is a known issue and we have an internal JIRA to track it. Currently there is no better way to improve the performance.

You can, however, increase the timeout limit via "hive.metastore.client.socket.timeout" and set it to a larger value (in seconds) to allow the query to finish. It should finish eventually, but it might take some time.

Our engineers are still discussing internally to determine the best fix for this issue; it is ongoing and we do not have a solution yet at this stage.


Super Guru
Hi,

The first one "Hive Metastore Connection Timeout" is the one you should try. If you look closely, "hive.metastore.client.socket.timeout" is just underneath it.

Regarding the JIRA ID, it is an internal JIRA, so you do not have access to it.

Another way is to set up a quick script that drops partitions in batches, and then drop the table once the number of partitions has been reduced to a reasonable level.


Super Guru

Hi Vibin,

I can see that this issue is tracked under HIVE-6980 in the upstream JIRA, and I can confirm that it has been fixed in CDH6.1 and CDH5.16 onwards.

If you are using an older version, you can try increasing the value of hive.metastore.client.socket.timeout as a workaround, as mentioned in the previous post.

Hope the above is helpful.

Cheers,
Eric


12 REPLIES

Master Collaborator
Note: I use Impala Shell v2.9.0-cdh5.12.0 (03c6ddb) built on Thu Jun 29 04:17:31 PDT 2017
Thanks in advance.

Cloudera Employee

Can you try dropping the table using Hive?

Master Collaborator

Thank you very much for the answer.

I tried to drop it with Hive, but it gives this error message:

hive> drop table cars;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:hive.metastore.sasl.enabled can't be false in non-testing mode)

I think it's another problem that we have to fix. Do you have any idea?
NB: I use Sentry with Hive.

Super Guru
Hi,

This is a known issue and we have an internal JIRA to track it. Currently there is no better way to improve the performance.

You can, however, increase the timeout limit via "hive.metastore.client.socket.timeout" and set it to a larger value (in seconds) to allow the query to finish. It should finish eventually, but it might take some time.
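
For example, a one-off DROP from the Hive CLI could pass a larger value directly; the 3600-second figure below is only an illustration, not a tuned recommendation:

hive --hiveconf hive.metastore.client.socket.timeout=3600 -e "DROP TABLE cars;"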

Our engineers are still discussing internally to determine the best fix for this issue; it is ongoing and we do not have a solution yet at this stage.

Master Collaborator

Hi @EricL,
Thanks for the reply.

Can you give me the JIRA tracking link for this issue, please?
And about 'hive.metastore.client.socket.timeout', I found 3 config settings with the same name:

1- Hive: Hive Metastore Connection Timeout (5m).
2- Hive: in Service Monitor Client Config Overrides (value: 60).
3- Impala: Catalog Server Hive Metastore Connection Timeout and Impala Daemon Hive Metastore Connection Timeout (value: 1h).


Can you help me figure out which one is the relevant one?


Hope this issue is resolved soon.
Thanks again.

Super Guru
Hi,

The first one "Hive Metastore Connection Timeout" is the one you should try. If you look closely, "hive.metastore.client.socket.timeout" is just underneath it.

Regarding the JIRA ID, it is an internal JIRA, so you do not have access to it.

Another way is to set up a quick script that drops partitions in batches, and then drop the table once the number of partitions has been reduced to a reasonable level.
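
For illustration only, here is a rough sketch of such a script. It assumes the table is partitioned by a single column named dt (not something confirmed in this thread), and the batch size of 500 is arbitrary; adjust both to your schema and to what your metastore can handle:

#!/bin/bash
# List every partition of the table; output lines look like: dt=2019-01-01
hive -e "SHOW PARTITIONS cars;" > /tmp/cars_partitions.txt

# Split the list into batches of 500 so each metastore call stays small.
split -l 500 /tmp/cars_partitions.txt /tmp/cars_batch_

for batch in /tmp/cars_batch_*; do
  {
    echo "ALTER TABLE cars DROP IF EXISTS"
    # Turn each dt=VALUE line into PARTITION (dt='VALUE'), and close the last one with ;
    sed "s/^dt=\(.*\)$/PARTITION (dt='\1'),/" "$batch" | sed '$ s/,$/;/'
  } > /tmp/cars_drop.hql
  hive -f /tmp/cars_drop.hql
done

# With the partition count reduced, the final drop should fit inside the timeout.
hive -e "DROP TABLE cars;"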

Master Collaborator

Yes, that's what I do now for the *drop* statement, but so far there is no easy solution for *alter*.
Thanks again @EricL

Super Guru
Glad I could be of help here.

New Contributor

@EricL Just wanted to know if there is any progress on this issue? I face the same issue too and want to drop a table with 50k partitions.