Member since: 11-17-2017
Posts: 76
Kudos Received: 7
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2382 | 05-11-2020 01:31 AM
 | 575 | 04-14-2020 03:48 AM
 | 3688 | 02-04-2020 01:29 AM
 | 791 | 10-17-2019 01:26 AM
 | 2506 | 09-24-2019 01:46 AM
02-19-2021
01:21 PM
Hi tmater, sorry for the delay. Yes, the user does exist in the directory in that OU. I actually have an update on this: originally the Cloudera cluster (on the AWS network) authenticated against my LDAP server (on the on-premise office network) over the WAN. I opened ports 389 and 636, both UDP and TCP, on my LDAP server and firewall, but that didn't work. I just finished setting up a VPN tunnel between AWS and the on-premise network, used the LAN IP in the LDAP settings on Impala, and now it works. So I'm not sure whether additional ports need to be open for Impala LDAP authentication or I did something wrong, but everything now works through LDAP and the VPN tunnel.
09-20-2020
12:27 PM
Hello there, I am Pradheep. It seems you are currently experiencing two problems: 1. From the AWS Linux host, you are unable to connect to the Impala daemon and are getting an SSL-related error. 2. You are also unable to connect from impala-shell. For impala-shell, instead of the "impala-shell -i my_host_url:21050" command, I recommend checking the CM > Impala > Connection String example command and sharing the result here. Based on the output, we will check for the SSL issue with the ODBC driver.
05-14-2020
09:44 AM
Hi @parthk ,
I'm happy to see you have found the resolution to your issue. Can you kindly mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future?
Thanks,
Vidya
04-14-2020
03:48 AM
Hi @Jumo, CDH 6.3.x is packaged with Impala 3.2; the packaging details can be found on this page. The 2.6.9 Impala ODBC driver can be used with CDH 6.3.x. I understand that the recommendation can be confusing, and I have reached out internally to update the documentation.
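If helpful, a quick way to double-check which Impala build the cluster is actually running is the version() function (a minimal sketch; the exact output string varies by release):
-- Returns the impalad build string, e.g. an Impala 3.2 / CDH 6.3.x identifier
SELECT version();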
02-04-2020
01:29 AM
1 Kudo
Hi @kentlee406, From the images it looks like Kudu is not installed on the QuickStart VM:
- the Kudu service cannot be seen in the cluster services list
- Impala cannot see any Kudu service on the config page
Could you try adding the Kudu service to the cluster? Please see the steps in our documentation here.
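Once the Kudu service has been added and Impala restarted, a quick smoke test from Impala could look like this (a hedged sketch; the table name and schema are illustrative):
-- Creating and querying a Kudu-backed table verifies that Impala can reach the Kudu masters
CREATE TABLE kudu_smoke_test (id BIGINT, name STRING, PRIMARY KEY (id))
PARTITION BY HASH (id) PARTITIONS 3
STORED AS KUDU;
INSERT INTO kudu_smoke_test VALUES (1, 'hello');
SELECT * FROM kudu_smoke_test;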
11-11-2019
05:58 AM
Hi @Asad, Impala does not fully support Unicode characters at the moment; please see the 'Character sets' chapter of our documentation here for more information. Could you advise whether the data is stored in UTF-8?
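One way to verify the encoding from the Impala side is to inspect the raw bytes of a few sample values (a sketch; the table and column names are illustrative):
-- Multi-byte UTF-8 characters appear as multi-byte sequences, e.g. C3A9 for 'é'
SELECT name, hex(name) FROM my_table LIMIT 10;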
10-17-2019
01:26 AM
Hi @ChineduLB, UDFs let you code your own application logic for processing column values during an Impala query. Adding a REFRESH/INVALIDATE METADATA to a UDF could cause unexpected behavior during value processing. The general recommendation for INVALIDATE METADATA/REFRESH is to execute it after the ingestion has finished; this way the Impala user does not have to worry about the staleness of the metadata. There is a blog post on how to handle "fast data" and make it available to Impala in batches: https://blog.cloudera.com/how-to-ingest-and-query-fast-data-with-impala-without-kudu/ Additionally, INVALIDATE METADATA/REFRESH can be executed from beeline as well; you just need to connect from beeline to Impala. This blog post has the details: https://www.ericlin.me/2017/04/how-to-use-beeline-to-connect-to-impala/
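For reference, the statement pair to run at the end of an ingestion batch would look like this (a minimal sketch; the table name is illustrative):
-- Picks up new data files added to an existing table
REFRESH my_ingest_table;
-- Heavier operation: needed when a table was created or changed outside Impala
INVALIDATE METADATA my_ingest_table;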
10-11-2019
06:59 AM
Hi @Shruhti, This is indeed odd. My first assumption would be that the 'select 1' queries are triggered silently by a client application such as a BI tool, perhaps to check or keep the connection alive. It might be worth checking the trace-level driver logs, which could verify whether the queries are coming from a tool/application. This can be done by changing the driver log level, which is described here for ODBC. Additionally, the query profile contains a Network Address as well; this should help confirm whether the source of the query is valid.
10-11-2019
06:32 AM
Hi @Nisha2019, This example appears to be a snippet from our documentation here. Just above that example DESCRIBE statement there is a sample CREATE TABLE query that generates this table schema; please see below. As for ingesting data into these tables, Impala does not currently support creating data with complex type columns; Loading Data Containing Complex Types describes this in more detail, and some more information can be found in the Complex type considerations chapter. Hive does not support inserting values into a Parquet complex type one by one either, but there are two solutions:
- Creating a temporary table with the values, then transforming it to the Parquet complex type with Hive; please see our documentation here for sample queries: Constructing Parquet Files with Complex Columns Using Hive
- Using an INSERT INTO ... SELECT <values> query to insert records one by one; reference queries can be found in the description of IMPALA-3938, and there is a sketch after the CREATE TABLE below. Please note that this generates a separate file for each record, and these occasionally need to be compacted.
CREATE TABLE struct_demo
(
id BIGINT,
name STRING,
-- A STRUCT as a top-level column. Demonstrates how the table ID column
-- and the ID field within the STRUCT can coexist without a name conflict.
employee_info STRUCT < employer: STRING, id: BIGINT, address: STRING >,
-- A STRUCT as the element type of an ARRAY.
places_lived ARRAY < STRUCT <street: STRING, city: STRING, country: STRING >>,
-- A STRUCT as the value portion of the key-value pairs in a MAP.
memorable_moments MAP < STRING, STRUCT < year: INT, place: STRING, details: STRING >>,
-- A STRUCT where one of the fields is another STRUCT.
current_address STRUCT < street_address: STRUCT <street_number: INT, street_name: STRING, street_type: STRING>, country: STRING, postal_code: STRING >
)
STORED AS PARQUET;
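As a hedged sketch of the second option above (run from Hive, not Impala, assuming Hive 0.13+ which allows SELECT without FROM; all literal values are illustrative):
INSERT INTO struct_demo
SELECT
  1 AS id,
  'Alice' AS name,
  -- the CAST keeps the struct field type aligned with the BIGINT field in the schema
  named_struct('employer', 'Cloudera', 'id', CAST(100 AS BIGINT), 'address', '123 Main St'),
  array(named_struct('street', '1 First St', 'city', 'Paris', 'country', 'FR')),
  map('graduation', named_struct('year', 2015, 'place', 'Budapest', 'details', 'BSc')),
  named_struct('street_address',
               named_struct('street_number', 10, 'street_name', 'Main', 'street_type', 'St'),
               'country', 'US', 'postal_code', '94000');
Each such insert writes a separate small file, which is why batching and occasional compaction are recommended.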