Member since: 10-10-2017
Posts: 181
Kudos Received: 5
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3608 | 08-23-2021 05:48 PM |
| | 9426 | 08-19-2021 06:47 PM |
| | 1053 | 07-30-2021 12:21 AM |
| | 2191 | 02-07-2020 04:13 AM |
| | 2237 | 01-09-2020 03:22 PM |
10-10-2021
10:35 PM
@RyanAtWork, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, could you provide the information @robbiez has requested?
09-19-2021
02:24 PM
This time we ran SHOW TABLE STATS and SHOW COLUMN STATS on the table before the issue occurred (just after the restart and before running INVALIDATE METADATA) and again afterwards, and we did not notice any difference in the output, but the problem reoccurred this time as well. The sequence of events was:
(1) The admin team does patching and restarts the cluster.
(2) The application team runs a DISTINCT year query and sees that only 27 of the 31 rows are present; 4 of the partitions are missing.
(3) As the support team, I run the SHOW ... STATS statements, which show all 31 partitions with correct row counts and sizes.
(4) We run INVALIDATE METADATA and REFRESH.
(5) The support team runs the SHOW ... STATS statements again, with the same result.
(6) The application team runs their query again and is now able to see all 31 partitions.
We will work with the vendor to get further help.
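For reference, a sketch of the Impala statements used in the steps above (the database, table, and partition column names are illustrative, not from the actual cluster):

```sql
-- (3)/(5) check partition and column statistics
SHOW TABLE STATS mydb.mytable;
SHOW COLUMN STATS mydb.mytable;

-- (4) force Impala to reload metadata for the table
INVALIDATE METADATA mydb.mytable;
REFRESH mydb.mytable;

-- (2)/(6) the query the application team uses to count visible partitions
SELECT DISTINCT year FROM mydb.mytable;
```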
08-25-2021
09:34 PM
@Amn_468 Has any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-19-2021
06:47 PM
This is a bug fixed by HIVE-15642. You can work around this issue by setting hive.metastore.dml.events to false.
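A minimal sketch of the workaround, applied per session from a Hive shell (it can also be set cluster-wide in hive-site.xml if you want it to persist):

```sql
-- Workaround until the HIVE-15642 fix is deployed:
-- disable metastore DML event notifications for this session
SET hive.metastore.dml.events=false;
```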
08-10-2021
03:21 AM
You need a UDF to parse the XML string and extract the values. As far as I know, Impala doesn't have a native function to do it.
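If the same data is also readable from Hive, one option is Hive's built-in xpath UDFs instead of writing your own (a sketch; the XML fragment and path are illustrative):

```sql
-- Hive (not Impala) ships xpath UDFs that can extract values
-- from an XML string, e.g. the text content of <b>:
SELECT xpath_string('<a><b>hello</b></a>', 'a/b');
-- returns 'hello'
```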
08-30-2020
11:11 PM
Can you be more specific about which load on the server is meant? Were there any solutions? The reply in question was: "If this issue happens intermittently, it might be caused by the load on the server."
02-07-2020
04:13 AM
If a Sqoop job failed or crashed in the middle of importing a table, the table is only partially imported. When you run the job again, it will start from scratch, so you need to clear the partially imported data first. Alternatively, if you know which rows have not been imported yet, you can use a WHERE clause when you restart the job to import only the remaining rows.
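A sketch of the second approach (the connection string, table name, and predicate are hypothetical; the assumption here is that rows with id > 100000 were not imported by the failed run):

```shell
# Hypothetical re-import of only the remaining rows after a failed run.
sqoop import \
  --connect jdbc:mysql://dbhost.example.com/sales \
  --username etl \
  --table orders \
  --where "id > 100000" \
  --target-dir /user/etl/orders \
  --append
```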
01-09-2020
03:08 PM
Sorry for the delay. Here is an example HAProxy configuration: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/impala_proxy.html#tut_proxy. Please check whether you set balance to source for the ODBC proxy. I have seen a client (I can't remember whether it was an ODBC or JDBC client) open a new connection to reuse an existing session; if balance is not set to source, HAProxy can forward that connection to a different Impala daemon, and the client then hits "Invalid query handle".

You can enable debug-level logging for the Impala ODBC driver. After the issue occurs, trace the operations in the ODBC driver logs. If the client opened a connection just before the "Invalid query handle" error, the load balancer may have forwarded that connection to the wrong Impala daemon. You can then look up the impalad logs by timestamp to find which Impala daemon the client connected to and check whether that daemon is the coordinator of the query (the coordinator can be found in the query profile). Otherwise, trace the query id in the coordinator's impalad logs to find out why it returned "Invalid query handle" to the client.
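For illustration, a minimal sketch of the relevant HAProxy section (host names and ports are assumptions; the Cloudera page linked above has a complete example):

```
# Sticky load balancing for ODBC/JDBC clients (illustrative hosts/ports).
# "balance source" pins a given client IP to one impalad, so a reused
# session is not forwarded to a different coordinator.
listen impala-odbc
    bind :21050
    mode tcp
    balance source
    server impalad1 host1.example.com:21050 check
    server impalad2 host2.example.com:21050 check
```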
12-16-2019
12:49 PM
The error was "Password verification failed". You set SSLKeyStore to a truststore.jks file, but it should be a keystore.jks file. Please change this property and try again.