09-26-2017 04:22 AM
I have a partitioned table with more than 50k partitions. It works well except for Hive Metastore operations such as DROP and ALTER ... RENAME, which fail with this error message:
Query: drop table cars ERROR: ImpalaRuntimeException: Error making 'dropTable' RPC to Hive Metastore: CAUSED BY: MetaException: Timeout when executing method: drop_table_with_environment_context; 600003ms exceeds 600000ms
I don't know whether this is a memory problem, or whether it's expected behaviour and I should adjust a timeout value. If so, which one?
Thanks in advance for your help.
10-02-2017 04:01 AM
10-02-2017 09:26 AM - edited 10-02-2017 09:42 AM
Thank you very much for the answer.
I tried to drop the table from Hive instead, but it gives this error message:
hive> drop table cars; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:hive.metastore.sasl.enabled can't be false in non-testing mode)
I think this is another problem we have to fix. Do you have any idea?
NB: I use Sentry with Hive.
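For context on that error: when Sentry is enabled, the metastore client is expected to authenticate over SASL (i.e. a Kerberized cluster), and the plugin refuses to run with SASL disabled outside of testing mode. A minimal hive-site.xml sketch of the setting involved, assuming a Kerberos-secured cluster (verify against your own security setup before changing anything):

```xml
<!-- hive-site.xml: with Sentry enabled, the metastore client must use
     SASL (Kerberos). Illustrative sketch only; confirm this matches
     your cluster's security configuration. -->
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
```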
10-02-2017 04:01 PM
10-03-2017 03:58 AM
Thanks for the reply.
Could you give me the JIRA link for this issue, please?
Also, regarding 'hive.metastore.client.socket.timeout', I find three config entries with that name:
1- Hive: Hive Metastore Connection Timeout (5m).
2- Hive: in Service Monitor Client Config Overrides (value: 60).
3- Impala: Catalog Server Hive Metastore Connection Timeout and Impala Daemon Hive Metastore Connection Timeout (value: 1h).
Can you help me figure out which of these is the relevant one?
I hope this issue gets resolved ASAP.
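One hint for narrowing this down: the "Timeout when executing method: drop_table_with_environment_context; 600003ms exceeds 600000ms" message is raised by the metastore server's own deadline check, which is driven by hive.metastore.client.socket.timeout on the Hive service side, so option 1 is the likely candidate. A hive-site.xml sketch with an illustrative (assumed, not recommended) value:

```xml
<!-- hive-site.xml on the Hive Metastore service: raise the socket
     timeout so long-running DDL on heavily partitioned tables can
     finish. The 3600s value below is an illustrative assumption. -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>3600</value>
</property>
```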
10-04-2017 03:38 AM
10-06-2017 10:13 AM
Yes, that's what I do now for *drop* statements, but so far there is no easy solution for *alter*.
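The batched-drop workaround can be sketched as follows: drop partitions in chunks with ALTER TABLE ... DROP PARTITION so that no single metastore RPC has to touch all 50k partitions, then drop the now mostly-empty table. The partition key name ("year"), the partition values, and the batch size below are all hypothetical placeholders, not taken from the thread.

```python
def batch_drop_statements(table, partition_values, key="year", batch_size=1000):
    """Yield one ALTER TABLE ... DROP PARTITION statement per batch of
    partitions, followed by a final DROP TABLE statement."""
    for i in range(0, len(partition_values), batch_size):
        batch = partition_values[i:i + batch_size]
        # Hive accepts multiple PARTITION specs in one DROP clause,
        # separated by commas.
        parts = ", ".join(f"PARTITION ({key}='{v}')" for v in batch)
        yield f"ALTER TABLE {table} DROP IF EXISTS {parts}"
    yield f"DROP TABLE IF EXISTS {table}"

# Example: 50,000 hypothetical partition values, dropped 1000 at a time.
stmts = list(batch_drop_statements("cars", [str(v) for v in range(50000)]))
print(len(stmts))  # → 51 (50 ALTER statements + 1 DROP TABLE)
```

Each generated statement can then be fed to Hive or Impala; keeping the batch size well below the point where a single RPC approaches the metastore timeout is the whole trick.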
Thanks again @EricL