Member since: 09-16-2021
Posts: 423
Kudos Received: 55
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 842 | 10-22-2025 05:48 AM |
| | 850 | 09-05-2025 07:19 AM |
| | 1618 | 07-15-2025 02:22 AM |
| | 2222 | 06-02-2025 06:55 AM |
| | 2439 | 05-22-2025 03:00 AM |
02-04-2025
06:35 AM
Check the Beeline console output and the HiveServer2 (HS2) logs to identify where it gets stuck, and act accordingly.
02-04-2025
06:33 AM
Use CAST to convert the string to the TIMESTAMP type:
SELECT CAST('2024-11-05 10:03:17.872195' AS TIMESTAMP) AS timestamp_value;
We can also try TIMESTAMP WITH LOCAL TIME ZONE; this helps retain precision when dealing with time zones:
SELECT CAST('2024-11-05 10:03:17.872195' AS TIMESTAMP WITH LOCAL TIME ZONE);
02-04-2025
05:55 AM
It appears that the user 'xxxx' has not been synchronized back from LDAP to the local OS on the relevant host. This could be due to a misconfiguration on the AD/LDAP side that prevents correct username resolution and causes the synchronization to fail. Resolve the problem on the AD/LDAP side to fix this issue. Also see the documentation for CDP 7.1.7.
12-26-2024
08:21 PM
1 Kudo
The error message "Invalid SessionHandle: SessionHandle" commonly occurs in Hive when there is an issue with the session handle being used. A session handle in Hive is a unique identifier for a session created when a user connects to Hive; it is used to maintain the state and context of that session.

One possible scenario for this error is when a table contains a large number of records and the cluster has multiple HS2 instances. If Knox is used to connect to Hive, Knox might connect to one HS2 instance and run a query there. Because of the large number of records, the query takes longer to process; if the connection times out on Knox's end and Knox reconnects to another HS2 instance, the query can fail with the "Invalid SessionHandle" error.

To investigate this scenario, check the HS2 logs and the Knox logs. Additionally, to determine why the query is running long, checking the HS2 logs and the application logs of any YARN job launched by HS2 can provide further insight.
11-28-2024
11:47 PM
1 Kudo
First of all, it is not recommended to use the same location for both internal and external tables. Internal (managed) tables in Hive are native tables that are fully controlled by Hive itself. External tables, on the other hand, can be accessed by other components such as Spark, Impala, and direct file-system operations. Because external tables are shared with other components, Hive has to rely on the files at the table's location rather than on its own metadata; to read the files and obtain the count, Hive launches a MapReduce job for external tables. It is recommended to use managed tables if other components are not using the table.
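As a hedged illustration of that distinction (the table names and LOCATION path below are hypothetical, not taken from the original question), the managed/external difference shows up directly in the DDL:

```sql
-- Managed (internal) table: Hive owns both the metadata and the data files.
CREATE TABLE sales_managed (
  id INT,
  amount DOUBLE
)
STORED AS ORC;

-- External table: the data lives at a path that other tools (Spark, Impala,
-- plain HDFS commands) may also read or write, so Hive only tracks the schema
-- and must scan the files on demand.
CREATE EXTERNAL TABLE sales_external (
  id INT,
  amount DOUBLE
)
STORED AS ORC
LOCATION '/data/shared/sales';
```

Dropping sales_managed removes its data as well, while dropping sales_external leaves the files at /data/shared/sales in place for the other components that use them.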
11-25-2024
11:05 PM
1 Kudo
From the attached console output, it is noticed that the ApplicationMaster (AM) failed to submit the DAG:
ERROR : Failed to execute tez graph.
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1730584072947_0051 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1730584072947_0051_000001 exited with exitCode: 1
Please check the application logs for application_1730584072947_0051 to identify the root cause of the failure.
11-22-2024
05:29 AM
1 Kudo
The job failed with an OutOfMemoryError (OOME) at the child task attempt level, as indicated by the stack trace. It was observed that certain MapReduce properties have been set, which may override the hive.tez.container.size property:
SET mapreduce.map.java.opts=-Xmx3686m;
SET mapreduce.reduce.java.opts=-Xmx3686m;
SET mapred.child.java.opts=-Xmx10g;
It is recommended to validate the YARN application logs to confirm whether the child task attempts were launched with 80% of hive.tez.container.size. If not, it is advised to remove the MapReduce configurations and re-run the job. Before re-running the query, it is also suggested to collect statistics for all the source tables; this will help the optimizer create a better execution plan.
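A minimal sketch of that recommendation, assuming Hive on Tez; the container size, heap value, and table name below are placeholders, not values from the original job:

```sql
-- Let Tez size the task JVMs: rely on hive.tez.container.size instead of the
-- MapReduce heap settings above (i.e., do not set the mapreduce.*.java.opts ones).
SET hive.tez.container.size=4096;   -- container size in MB; pick what your queue allows
SET hive.tez.java.opts=-Xmx3276m;   -- roughly 80% of the container size

-- Collect table- and column-level statistics for the source tables so the
-- optimizer can build a better execution plan.
ANALYZE TABLE sales_source COMPUTE STATISTICS;
ANALYZE TABLE sales_source COMPUTE STATISTICS FOR COLUMNS;
```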
11-22-2024
04:56 AM
1 Kudo
If you suspect that TEZ-4032 is the cause, consider upgrading your cluster to CDP 7 and testing again, as the fix has been backported to CDP 7.
11-21-2024
10:14 PM
Ideally, the statements below should work:
use default;
show tables;
Please check the HiveServer2 and HMS logs and share the stack trace so the root cause can be identified.
11-18-2024
05:08 AM
1 Kudo
The query seems to be failing during the compilation phase, which indicates a possible issue with its syntax:
Error: Error while compiling statement: FAILED: ParseException line 1:11 missing EOF at ';' near 'default' (state=42000,code=40000)
It is important to validate the SQL syntax of the query to identify any syntax errors that may be causing the problem. Another point to consider is ensuring that the column names in the query match those in the source and target tables, as any mismatch can lead to errors.
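For illustration only (the statements below are an example, not the original query): a ParseException of the form "missing EOF at ';'" is often raised when more than one semicolon-separated statement is submitted as a single statement over JDBC, since HiveServer2 compiles one statement at a time.

```sql
-- Submitted as ONE statement over JDBC, this typically fails with
-- "ParseException ... missing EOF at ';' near 'default'":
--   use default;show tables;

-- Submitting each statement separately compiles cleanly:
use default;
show tables;
```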