Member since: 01-25-2019
Posts: 75
Kudos Received: 10
Solutions: 13
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3032 | 02-25-2021 02:10 AM |
| | 1699 | 02-23-2021 11:31 PM |
| | 3492 | 02-18-2021 10:18 PM |
| | 4687 | 02-11-2021 10:08 PM |
| | 18469 | 02-01-2021 01:47 PM |
11-04-2020 01:32 AM
@drgenious Could you please connect through impala-shell and submit the same query, just to confirm that the error is not coming from Impala itself.
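For reference, a minimal sketch of running the same query directly from impala-shell (the host name and query are placeholders, not taken from the thread):

```bash
# connect to an impalad host and run the query interactively
impala-shell -i impalad-host.example.com:21000

# or submit the query non-interactively
impala-shell -i impalad-host.example.com:21000 -q "SELECT COUNT(*) FROM my_db.my_table"
```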
11-02-2020 12:35 AM
Could you please post the error you are observing so that I can help you?
11-01-2020 06:10 AM
@HanzalaShaikh Could you try the statement below and see whether it helps:
grant SELECT on DATABASE `_impala_builtins` to role <role-name mapped to user hadmin>
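As a rough sketch, the full Sentry-style flow in impala-shell could look like the following (the role and group names are placeholders for whatever is mapped to the hadmin user):

```sql
-- create the role if it does not exist yet and map it to the user's group
CREATE ROLE hadmin_role;
GRANT ROLE hadmin_role TO GROUP hadmin;

-- grant read access on the built-in functions database
GRANT SELECT ON DATABASE _impala_builtins TO ROLE hadmin_role;

-- verify the privilege was applied
SHOW GRANT ROLE hadmin_role;
```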
10-29-2020 10:08 AM
Thanks for the reply. I found the issue: the Kerberos setup was fine, and the only thing missing was providing the Kerberos principal and keytab path in IMPALA_CATALOG_ARGS. The CDH documentation that I followed (CDH Impala Kerberos, point 7: "Add Kerberos options to the Impala defaults file, /etc/default/impala. Add the options for both the impalad and statestored daemons, using the IMPALA_SERVER_ARGS and IMPALA_STATE_STORE_ARGS variables") only mentions updating IMPALA_STATE_STORE_ARGS and IMPALA_SERVER_ARGS, which is why the catalog server was not authenticating with Kerberos. After adding the Kerberos principal and keytab path I was able to start the catalog server without any issues.
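For anyone hitting the same issue, a sketch of what the extra entry in /etc/default/impala could look like (the principal and keytab path are placeholders; keep whatever other flags you already pass):

```bash
# /etc/default/impala -- add Kerberos options for the catalog server as well
IMPALA_CATALOG_ARGS=" -log_dir=/var/log/catalogd \
    -principal=impala/_HOST@EXAMPLE.COM \
    -keytab_file=/etc/impala/conf/impala.keytab"
```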
10-26-2020 02:32 PM
@ParthiCyberPunk Unfortunately, you didn't share the connect string. Below is an example you could use:
jdbc:hive2://host:10000/DB_name;ssl=true;sslTrustStore=$JAVA_HOME/jre/lib/security/certs_name;trustStorePassword=$password
Substitute the host, port, truststore location, certificate name, and password accordingly. Keep me posted.
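If it helps, a hedged example of using that string from the beeline CLI (host, port, truststore path, and credentials are placeholders):

```bash
beeline -u "jdbc:hive2://host:10000/DB_name;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit" -n username -p password
```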
10-25-2020 10:55 PM
@RandyGoering Check whether the path below exists on the host/node where you are submitting the sqoop job:
/usr/lib/hadoop
If it does, please remove it, which will allow HADOOP_HOME to be set to /opt/cloudera/parcels/CDH/lib/hadoop, the correct path. For your reference: https://my.cloudera.com/knowledge/Sqoop-Command-Fails-after-Upgrade?id=70109
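A minimal sketch of the check and cleanup, assuming a parcel-based CDH install (run on the node where the sqoop job is submitted):

```bash
# check whether the stale Hadoop directory exists
ls -ld /usr/lib/hadoop

# if it does, move it out of the way (or remove it) so HADOOP_HOME
# resolves to the parcel path /opt/cloudera/parcels/CDH/lib/hadoop
sudo mv /usr/lib/hadoop /usr/lib/hadoop.bak

# re-run the failing sqoop command (or a simple check) to confirm the fix
sqoop version
```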
10-24-2020 11:40 PM
Hi Tushar, thanks a lot for your quick reply. The resolution you provided worked, so I am accepting it as the solution. Thanks once again.
10-23-2020 07:52 AM
Hi @tusharkathpal ! Thanks for the detailed explanation, really appreciate it!

In my case, all tables were created beforehand, so all their static metadata should already be cached. However, clients issue commands to create partitions in Impala tables from time to time (every hour), and refresh commands are also periodically issued on those new partitions (every minute) to make the parquet files inside them available to be queried in Impala. I can confirm that only a handful of tables were being ingested into during the HDFS switchovers. Probably the partition creation on "impala_table", or a refresh command on one of its partitions, triggered a fetch of metadata from the catalog server, which would explain why it happened only for "impala_table".

About the hive metatool command: it is listing the correct HDFS locations. I don't think it applies in my case, because HDFS is already deployed with the final nameservice in the config before Hadoop starts up (i.e., there is no upgrade from a non-HA to an HA setup involved).

About automatic invalidation of metadata, I will consider it for future Impala upgrades. It would help by handling the metadata change on the "alter table add partition" command. However, I would need to change part of the ingestion pipeline, because this use case of adding files directly on the filesystem is not supported.
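For context, a sketch of the hourly/minutely commands described above as they would look in Impala SQL (the table, partition column, and values are placeholders):

```sql
-- hourly: create the new partition on the table
ALTER TABLE my_db.impala_table ADD IF NOT EXISTS PARTITION (dt='2020-10-23-07');

-- every minute: pick up parquet files written directly into that partition's directory
REFRESH my_db.impala_table PARTITION (dt='2020-10-23-07');
```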
10-20-2020 10:24 PM
@BI123 Please correct me if my understanding is wrong: SSRS (SQL Server Reporting Services) is a third-party business intelligence application, and you want to use Impala as the service it pulls data from. You would have to set up an ODBC driver connection for Impala on your host (assuming your application supports ODBC) and ensure SSRS uses that driver. Use the documentation below to set up the ODBC driver: https://www.cloudera.com/downloads/connectors/impala/odbc/2-6-11.html
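As a rough, hedged sketch only (the driver name and key names assume the Cloudera ODBC Driver for Impala; host, port, and credentials are placeholders), a DSN-less connection string on the client side could look like:

```
Driver=Cloudera ODBC Driver for Impala;Host=impalad-host.example.com;Port=21050;AuthMech=3;UID=your_user;PWD=your_password;
```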
10-20-2020 10:13 AM
@kundansonuj Impala tables should have fewer than 30K partitions to ensure you get the necessary performance. The workaround would be to recreate the tables with fewer partitions.
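A quick way to check where a table currently stands, as a sketch (the database and table names are placeholders):

```sql
-- list the table's partitions; the number of rows returned is the partition count
SHOW PARTITIONS my_db.my_table;
```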