Member since: 03-02-2021
Posts: 43
Kudos Received: 1
Solutions: 0
03-10-2023
02:07 AM
Below are the version details:
CDP 7.1.8 | CM: 7.8.1 | Hive: 3.1.3 | Spark 2: 2.4.8 | Spark 3: 3.3.0

ISSUE DESCRIPTION: We have a table with more than 3 million rows. We are not able to execute a conditional query (with WHERE or COUNT) using the Spark execution engine in Hive. When we set the Hive execution engine to Spark (set hive.execution.engine=spark) we get the error below.

FAILED QUERY:
SELECT * FROM test_.JOBS__PROJECT WHERE state = 'DONE' LIMIT 10;

ERROR:
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session c-47f2-aceb-22390502b303
Error: Error while compiling statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session d6d96da5-f2bc-47f2-aceb-22390502b303 (state=42000,code=30041)

We are able to execute the same query with the execution engine set to Tez, and we can also execute it from spark-shell. Note that we are able to successfully execute a non-conditional query with the Spark execution engine.

SUCCESSFUL QUERY:
SELECT * FROM test_.JOBS__PROJECT LIMIT 10;
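One caveat worth checking first: in CDP 7.x, Hive on Spark is generally deprecated in favor of Tez, so `hive.execution.engine=spark` may simply be unsupported on this stack. If Hive on Spark is in fact available in your deployment, a common first step for return code 30041 is raising the Spark client launch timeouts before re-running the query. A minimal sketch, assuming the standard Hive-on-Spark property names apply here; the timeout values are assumptions to tune for your cluster:

```sql
-- Sketch: run in the same beeline session before the failing query.
SET hive.execution.engine=spark;
-- Spark client launch timeouts (values are assumptions; defaults are often too low on busy clusters):
SET hive.spark.client.connect.timeout=30000ms;
SET hive.spark.client.server.connect.timeout=300000ms;
-- Re-run the conditional query:
SELECT * FROM test_.JOBS__PROJECT WHERE state = 'DONE' LIMIT 10;
```

If the session still fails to start, the YARN application logs for the Spark session ID quoted in the error message usually contain the underlying launch failure.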
03-09-2023
09:45 AM
Any solution found for this? I am having the same issue; both HiveServer and the Spark Gateway are present on the same node where the query is executed.
02-16-2023
04:26 AM
Below are the version details:
CDP 7.1.8 | CM: 7.8.1 | Hive: 3.1.3

We are trying to insert data into a partitioned table from an ORC file containing approximately 50,000 rows, using the command below to load the data:

LOAD DATA INPATH '/user/test/r_14_5.orc' INTO TABLE va_offer_16;

We see that all the map and reducer tasks complete within 500 s:

----------------------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 1 ..........  container   SUCCEEDED      7          7        0        0       0       0
Reducer 2 ......  container   SUCCEEDED     33         33        0        0       0       0
----------------------------------------------------------------------------------------------
VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 2601.60 s
----------------------------------------------------------------------------------------------

However, there is no further progress for the next 2000 s, and the job then fails with the error below.

ERROR:
ERROR : FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
INFO : Completed executing command(queryId=hive_20230216163408_de7cb993-2086-4011-8788-50f46ed6e7f3); Time taken: 2605.485 seconds
INFO : OK
Error: Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out (state=08S01,code=40000)
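A "Read timed out" in MoveTask typically means the final move/partition-registration call to the Hive Metastore outlived the client socket timeout, which fits the ~2600 s elapsed time here. A hedged sketch of the relevant knob; the property name is from standard Hive configuration, the value is an assumption sized to this job, and in CDP this setting often has to be changed in Cloudera Manager (Hive service configuration) rather than per session:

```sql
-- Sketch, assuming the metastore call (not HDFS) is what times out.
-- Default is commonly 600s; the failing job ran ~2600s before the error.
SET hive.metastore.client.socket.timeout=3600s;
LOAD DATA INPATH '/user/test/r_14_5.orc' INTO TABLE va_offer_16;
```

If the table has a very large number of partitions, it is also worth checking metastore database health, since slow partition registration is a common cause of this timeout.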
01-16-2023
03:35 AM
Hello Team,

I am facing the below issues with Hive while loading data from HDFS. The errors are:

Error 1:
ERROR : Vertex failed, vertexName=Reducer 2, vertexId=vertex_1672408587164_0154_2_01, diagnostics=[Task failed, taskId=task_1672408587164_0154_2_01_000001, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1672408587164_0154_2_01_000001_0:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing vector batch (tag=0) (vectorizedVertexNum 1)

Error 2:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to 2000 partitions per node, number of dynamic partitions on this node: 2001

Error 3:
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1672408587164_0175_1_00, diagnostics=[Vertex vertex_1672408587164_0175_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: offer_15__temp_table_for_load_data__ initializer failed, vertex=vertex_1672408587164_0175_1_00 [Map 1], java.lang.RuntimeException: ORC split generation failed with exception: java.io.IOException: Illegal type id 0. The valid range is 0 to -1

Please suggest a resolution for the above errors. Thanks.
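For Error 2, the message itself names the limiting properties, so the usual remedy is to raise them above the real partition count of the load. A minimal sketch; the limit values below are assumptions and should be sized to the actual number of partitions the data produces:

```sql
-- Sketch for Error 2 only (values are assumptions; size them to the real partition count).
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=10000;       -- total across the job
SET hive.exec.max.dynamic.partitions.pernode=5000; -- per node; this is the limit hit (2001 > 2000)
```

Errors 1 and 3 are likely different problems: "ORC split generation failed ... Illegal type id 0" usually points at an ORC file whose schema does not match the target table (or a corrupt/incompatible ORC file), and a vectorization failure while processing a batch often has the same root cause, so the file schema is worth verifying before retrying.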
01-16-2023
03:28 AM
Hello Team,

I am facing the below issues with the CDP YARN dashboard. Screenshots follow.

CDP version: 7.1.7 | CM version: 7.6.5 | Cloudera Runtime

Screenshot 1
Screenshot 2

Please help me with a solution for this. Thanks.
Labels: Cloudera Data Platform (CDP)
09-21-2022
02:09 AM
1 Kudo
@Shelton Updating the password via the x_portal_user table is an option I have already tried, after inserting the values into the empty table, but in my case it is not working.
09-16-2022
12:59 PM
Hello Team,

I have an on-premise CDP setup and I am unable to log in to the Ranger admin console.

Details: CDP 7.1.7 | CM: 7.6.5 | Non-Kerberized | Database: MySQL for all services

I have tried both of the approaches below:
https://cloudera.ericlin.me/2020/02/how-to-update-ranger-web-ui-admin-users-password/
https://community.cloudera.com/t5/Support-Questions/Ranger-Password-reset/m-p/280992

In my case the x_portal_user table under the ranger database was an empty set, so I manually inserted values for the fields:

insert into x_portal_user(id,first_name,login_id,password,status) values(1,'Admin','admin','ceb4f32325eda6142bd65215f4c0f371',1);

The row was inserted with the above query. I then ran FLUSH PRIVILEGES and COMMIT in the MySQL database, and also tried one more insert statement like the one above.

On the Ranger side, I have Kerberos disabled and the Admin Authentication Method set to Unix.

Please help/suggest a solution. Thanks
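One thing to verify with this approach: older Ranger versions store the x_portal_user password as an unsalted MD5 hex digest, so the hash you insert must be the MD5 of the exact password you intend to type at login (newer Ranger releases may use bcrypt instead, in which case a raw MD5 insert will not work at all). A hedged sketch of generating such a digest on Linux; the password 'admin' below is only an illustration, not a recommendation:

```shell
# Compute the MD5 hex digest of a chosen password (no trailing newline!).
# Illustrative password: admin
printf '%s' 'admin' | md5sum | cut -d' ' -f1
# -> 21232f297a57a5a743894a0e4a801fc3
```

The resulting hex string is what would go in the `password` column of x_portal_user; if the hash you inserted was copied from elsewhere, you may simply not know which password it corresponds to.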
07-15-2022
12:57 AM
Hello Team, We need Spark 3.2.1 and the latest Hive alongside it. I have checked and found that these are present in CDP 7.1.7. Can we install CDP 7.1.7 on on-premise physical machines? Please suggest/help with the above question. Thanks
07-13-2022
06:50 AM
It proceeds further with starting the job and printing the tracking URLs, but the job never completes and remains in a hung state.
07-11-2022
11:33 PM
Hi @jagadeesan, I already tried this option with SET, but no luck.