Member since
07-29-2015
535
Posts
140
Kudos Received
103
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4610 | 12-18-2020 01:46 PM
 | 2882 | 12-16-2020 12:11 PM
 | 1950 | 12-07-2020 01:47 PM
 | 1514 | 12-07-2020 09:21 AM
 | 989 | 10-14-2020 11:15 AM
04-04-2019
01:49 PM
I just tested with ClouderaImpalaJDBC-2.6.4.1005 and it works for me with the following JDBC URL. I can see in the query profile that it takes effect.

static final String DB_URL = "jdbc:impala://localhost:21050/functional_parquet;mem_limit=3gb";

From the profile:

Query Options (set by configuration): MEM_LIMIT=3221225472
03-29-2019
10:59 AM
1 Kudo
Hi @ChineduLB, there is no real difference between Impala and Hive tables - Impala and Hive should be able to read and write the same tables, including partitioned tables, etc.
03-26-2019
10:19 AM
Impala expects your UDF code and its dependencies to be in a single .so, so you'd have to statically link any libraries you depend on.
03-25-2019
03:40 PM
1 Kudo
This isn't possible unless you include a timestamp or sequence number in every record. There's no concept of an order of rows built into Hive or Impala.
03-25-2019
12:36 AM
void FunnelInit(FunctionContext* context, StringVal* val) {
  EventLogs* eventLogs = new EventLogs();
  val->ptr = (uint8_t*) eventLogs;
  // Exit on failed allocation. Impala will fail the query after some time.
  if (val->ptr == NULL) {
    *val = StringVal::null();
    return;
  }
  val->is_null = false;
  val->len = sizeof(EventLogs);
}

I did another scan, and the memory management in the above function is also slightly problematic - the memory attached to the intermediate StringVal would be better allocated through the Impala UDF interface so that Impala can track the memory consumption. E.g. see https://github.com/cloudera/impala-udf-samples/blob/bc70833/uda-sample.cc#L76 .

I think the real issue, though, is the EventLogs data structure and the lack of a Serialize() function. It's a somewhat complex nested structure with the string and vector. In order for the UDA to work, you need to have a Serialize() function that flattens the intermediate result into a single StringVal. This is pretty unavoidable, since Impala needs to be able to send the intermediate values over the network and/or write them to disk, and Impala doesn't know enough about your data structure to do it automatically. Our docs mention this here: https://www.cloudera.com/documentation/enterprise/latest/topics/impala_udf.html#udafs.

Putting it into practice is a bit tricky. One working example is the implementation of reservoir sampling in Impala itself. Unfortunately I think it's a little over-complicated: https://github.com/apache/impala/blob/df53ec/be/src/exprs/aggregate-functions-ir.cc#L1067

The general pattern for complex intermediate values is to have a "header" that lets you determine whether the intermediate value is currently serialized, followed by either the deserialized representation, or the serialized representation after the "header" using a flexible array member or similar - https://en.wikipedia.org/wiki/Flexible_array_member. The Serialize() function converts the representation by packing any nested structures into a single StringVal with the header in front. Then other functions can switch back to the deserialized representation. Or, in some cases, you can be clever and avoid the conversion entirely (that's what the reservoir sample function above is doing, and part of why it's overly complex).
Anyway, a really rough illustration of the idea is as follows:

struct DeserializedValue {
  ...
};
struct IntermediateValue {
  bool serialized;
  union {
    DeserializedValue val;
    char buf[0];
  };
  StringVal Serialize() {
    if (serialized) {
      // Just copy the serialized representation to the output StringVal.
    } else {
      // Flatten val into an output StringVal.
    }
  }
  void DeserializeIfNeeded() {
    if (serialized) {
      // Unpack buf into val.
    }
  }
};

Just as a side note, the use of the C++ built-in vector and string in the intermediate value can be problematic if they're large, since Impala doesn't account for the memory involved. But that's very much a second-order problem compared to it not working at all.
03-22-2019
08:38 PM
delete src.ptr; <-- that is a bug that will definitely cause Impala to crash if you run the UDA enough times. Impala manages that memory, and it's definitely not valid to free it yourself! The Impala runtime automatically manages the memory for StringVal inputs.
03-14-2019
10:31 AM
I think you're probably running into this issue: https://issues.apache.org/jira/browse/IMPALA-8109. It would help if you could provide the "SHOW FILES" output for the table, along with the Impala version that you're running (i.e. the output of "select version()").
03-14-2019
10:29 AM
What file format are you using? Can you attach an Impala query profile from the query?
03-07-2019
09:14 AM
Yeah, we need to make some changes in Impala to optimise this case (large SELECT result sets) better; some of that work is already underway. If you're doing large extracts of data, it's often better to do a "CREATE TABLE AS SELECT" into a text table and download those files directly from the filesystem, if that's possible.
03-07-2019
09:02 AM
Oh, the best reference for building Impala is the Apache wiki. https://cwiki.apache.org/confluence/display/IMPALA/Building+native-toolchain+from+scratch+and+using+with+Impala is a bit more hidden and covers how to build the third-party dependencies.