Member since: 07-17-2017
Posts: 143
Kudos Received: 16
Solutions: 17
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1510 | 07-03-2019 02:49 AM |
| | 1699 | 04-22-2019 03:13 PM |
| | 1412 | 01-30-2019 10:21 AM |
| | 8129 | 07-25-2018 09:45 AM |
| | 7421 | 05-31-2018 10:21 AM |
05-07-2019
03:58 AM
The same workaround worked for me too. I am also getting NULL when selecting from_unixtime(starttime), where starttime is a BIGINT. Is this a bug in Impala or ...? Also, in Hive the following query normally works, but in Impala it returns NULLs:

SELECT cast(starttime as TIMESTAMP) from dynatracelogs ORDER BY starttime desc LIMIT 100
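One common cause of NULLs in this situation (a guess, not confirmed by the thread) is that the BIGINT column stores epoch *milliseconds*, while from_unixtime() expects seconds and returns NULL for out-of-range values. If that is the case here, dividing by 1000 first is a sketch of a fix:

```sql
-- Assumption: starttime holds epoch milliseconds rather than seconds.
-- from_unixtime() expects seconds, so convert before formatting.
SELECT from_unixtime(CAST(starttime DIV 1000 AS BIGINT)) AS start_ts
FROM dynatracelogs
ORDER BY starttime DESC
LIMIT 100;
```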
05-06-2019
05:12 PM
The output of "explain <query>" is often helpful too.
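For instance (a hypothetical query, just to show the shape of the command):

```sql
-- Run in impala-shell; prints the query plan without executing the query.
EXPLAIN SELECT count(*) FROM dynatracelogs WHERE starttime IS NOT NULL;
```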
05-06-2019
03:27 PM
1 Kudo
Thanks to everyone who replied. It turns out that references to truststores, server keys, etc., and their associated passwords may be cached, so when we changed these after moving the cluster, creating new certs and replacing the passwords in CDH was insufficient. After DELETING all fields containing passwords, cert locations, key locations, etc., unchecking SSL, restarting the cluster, and adding the references back in, everything works. Ugh - who knew! 🙂 B
04-22-2019
04:07 PM
Thanks, I will try that. Do you have any suggestions on the best way to implement an SCD Type 2 dimension in Hadoop? Our dimension table has several sources, and all of them should be able to load/update the dimension table concurrently.
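Since the thread does not settle on an approach, here is one common SCD Type 2 pattern, sketched with Hive's MERGE statement (available on ACID tables in Hive 2.2+). All table and column names (dim_customer, stg_customer, attr_hash, is_current, valid_to) are illustrative assumptions, not from the thread:

```sql
-- Step 1: close out current rows whose attributes changed, and insert
-- brand-new keys. attr_hash is an assumed hash of the tracked columns.
MERGE INTO dim_customer d
USING stg_customer s
ON d.customer_id = s.customer_id AND d.is_current = true
WHEN MATCHED AND d.attr_hash <> s.attr_hash
  THEN UPDATE SET is_current = false, valid_to = current_timestamp()
WHEN NOT MATCHED
  THEN INSERT VALUES (s.customer_id, s.name, s.attr_hash,
                      true, current_timestamp(), NULL);

-- Step 2: insert the new versions of the rows just closed out
-- (simplified; assumes one current version per key before the merge).
INSERT INTO dim_customer
SELECT s.customer_id, s.name, s.attr_hash,
       true, current_timestamp(), NULL
FROM stg_customer s
JOIN dim_customer d
  ON d.customer_id = s.customer_id
WHERE d.is_current = false
  AND d.attr_hash <> s.attr_hash;
```

For several concurrent sources, running this per source against an ACID table lets Hive's transaction manager serialize the updates; without ACID, a partition-per-source staging scheme is the usual fallback.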
02-09-2019
04:47 AM
Hi @Tim Armstrong While IMPALA-1618 is still open and unresolved, I can confirm that this "workaround" is safe and efficient (I have been using it at large scale for more than 9 months), and it is the only solution I have found to solve, or at least get around, this big problem. I hope the main problem will be fixed ASAP. Thanks for the remark.
01-30-2019
10:21 AM
Hi @Rr, Please give us more details, error messages, or screenshots so we can help you.
11-05-2018
08:48 AM
I tried the ALTER command below in impala-shell 2.12.0 with Kudu 1.7.0, but I'm getting an error. My table is an external table in Impala. The error message is strange - of course the new table doesn't exist; I want to create it with the command.

ALTER TABLE res_dhcp_int SET TBLPROPERTIES('kudu.table_name'='res_dhcp_int');

Query: ALTER TABLE res_dhcp_int SET TBLPROPERTIES('kudu.table_name'='res_dhcp_int')
ERROR: TableLoadingException: Error loading metadata for Kudu table res_dhcp_int
CAUSED BY: ImpalaRuntimeException: Error opening Kudu table 'res_dhcp_int', Kudu error: The table does not exist: table_name: "res_dhcp_int"

Is this a bug? EDIT: I just read IMPALA-5654; it seems that with Impala 2.12.0 this ALTER command no longer works! I need an alternative for that 😞
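One possible alternative, sketched under the assumption that the table really is EXTERNAL (so DROP removes only Impala's metadata, not the underlying Kudu data): instead of ALTERing kudu.table_name, drop the mapping and recreate it pointing at the Kudu table you want. The name below is taken from the post above; adjust it to the actual target Kudu table.

```sql
-- Drop only the Impala metadata (safe for EXTERNAL tables), then
-- recreate the mapping to the desired Kudu table.
DROP TABLE res_dhcp_int;

CREATE EXTERNAL TABLE res_dhcp_int
STORED AS KUDU
TBLPROPERTIES ('kudu.table_name' = 'res_dhcp_int');
```

Verify the table is external first (DESCRIBE FORMATTED shows Table Type), since dropping a managed Kudu table deletes the data.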
07-26-2018
02:17 AM
Hi @lonetiger You can do it with two kinds of scripts:

1- List all tables: SHOW TABLES; then, for each table, run DESCRIBE FORMATTED tableX; extract the owner from the output and, if it is the one you want, drop the table.

2- Connect to your Hive metastore DB and get the list of tables for the owner you want:

SELECT "TBL_NAME"
FROM "TBLS"
WHERE "OWNER" = 'ownerX';

Then drop them. Good luck.
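Building on approach 2, the metastore can also generate the DROP statements for you (PostgreSQL-style quoting, matching the query above; TBLS and DBS are the standard metastore tables). This is a sketch - review the generated statements before running them:

```sql
-- Emit one fully qualified DROP TABLE statement per table owned by ownerX.
SELECT 'DROP TABLE ' || d."NAME" || '.' || t."TBL_NAME" || ';'
FROM "TBLS" t
JOIN "DBS" d ON t."DB_ID" = d."DB_ID"
WHERE t."OWNER" = 'ownerX';
```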
07-16-2018
01:37 PM
@AntonyN thanks for following up - glad to hear it!