Member since
11-11-2019
634
Posts
33
Kudos Received
27
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 269 | 10-09-2025 12:29 AM |
| | 4811 | 02-19-2025 09:43 PM |
| | 2125 | 02-28-2023 09:32 PM |
| | 4015 | 02-27-2023 03:33 AM |
| | 26017 | 12-24-2022 05:56 AM |
10-13-2021
02:16 AM
Please ignore the property type; just fill in the key and the value.
10-12-2021
09:07 AM
@MikeB Please provide a screenshot.
10-07-2021
10:43 PM
@MikeB If you are sure about the solution, you can add the property in "Custom hive-site" and "Custom hive-metastore-site": add the property and its value. Please accept this as the solution if it works.
10-04-2021
02:24 AM
Can you try `principal=hive/_HOST`?
10-01-2021
09:56 AM
@alxKd Could you please explicitly provide `principal=zookeeper-mycluster/server.com@REALM.COM` in the beeline connection string and try again?
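For reference, here is a minimal sketch of what the full beeline invocation would look like with the principal passed explicitly in the JDBC URL. The host `server.com`, port `2181`, and the ZooKeeper discovery parameters below are placeholders for illustration, not values from a real cluster:

```shell
# Build the JDBC URL with the Kerberos principal given explicitly.
# All host/port/namespace values here are placeholders (assumptions).
JDBC_URL="jdbc:hive2://server.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=zookeeper-mycluster/server.com@REALM.COM"

# Print the command; on a real Kerberized cluster you would kinit first
# and then actually run:  beeline -u "$JDBC_URL"
echo beeline -u "\"${JDBC_URL}\""
```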
09-30-2021
10:48 AM
1 Kudo
Yes, that was indeed the problem 🙂 I was about to comment on that.
09-30-2021
10:30 AM
1 Kudo
@cortland I am able to achieve this; please find my test case below.

abc.txt:

```
"1","peter","He is Data enginer", "Senior Engineer"
"2","Anee","Hadoop Engineer","Lead"
"3","James","Data, Architect","Sr Architect"
```

```
hdfs dfs -put abc.txt /user/hive/
```

```sql
CREATE EXTERNAL TABLE test_csv (
  num string,
  name string,
  work string,
  designation string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ',',
  "quoteChar" = '"'
)
STORED AS TEXTFILE;

LOAD DATA INPATH '/user/hive/abc.txt' INTO TABLE test_csv;

SELECT * FROM test_csv;
```

```
+---------------+----------------+---------------------+-----------------------+
| test_csv.num  | test_csv.name  | test_csv.work       | test_csv.designation  |
+---------------+----------------+---------------------+-----------------------+
| 1             | peter          | He is Data enginer  | Senior Engineer       |
| 2             | Anee           | Hadoop Engineer     | Lead                  |
| 3             | James          | Data, Architect     | Sr Architect          |
| 1             | peter          | He is Data enginer  | Senior Engineer       |
| 2             | Anee           | Hadoop Engineer     | Lead                  |
| 3             | James          | Data, Architect     | Sr Architect          |
+---------------+----------------+---------------------+-----------------------+
```

Please accept this as the solution if it answers your queries and the test case works in your scenario.
09-30-2021
10:04 AM
@BigData-suk When you say it is hung at the reducer level: do all the containers take more time, or do only a few containers in the reducer take a long time and hang? If only a few containers at the reducer level take a long time, there is data skew, and you will have to rewrite the query.
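If a few hot keys are causing the skew, one common rewrite is to "salt" the skewed key so its rows spread across more reducers and then re-aggregate. A minimal HiveQL sketch, assuming a hypothetical table `events` with a skewed `user_id` column (the table and column names are illustrative, not from the thread):

```sql
-- Option 1: let Hive add an extra shuffle stage for skewed GROUP BYs.
SET hive.groupby.skewindata=true;
SELECT user_id, COUNT(*) FROM events GROUP BY user_id;

-- Option 2: manual salting -- pre-aggregate per (key, salt) so the hot
-- key is split across ~10 reducers, then merge the partial counts.
SELECT user_id, SUM(cnt) AS cnt
FROM (
  SELECT user_id, salt, COUNT(*) AS cnt
  FROM (SELECT user_id, pmod(hash(rand()), 10) AS salt FROM events) s
  GROUP BY user_id, salt
) t
GROUP BY user_id;
```

Both forms trade one pass for two smaller aggregation stages, which helps only when the slowness really is a handful of overloaded reducers rather than an overall data-volume problem.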
09-30-2021
01:07 AM
1 Kudo
@Mas_Jamie Apologies, unfortunately we don't have the document 😞
09-30-2021
12:00 AM
@enirys For a memory crash, we need a heap dump. Please append the below to the JAVA_OPTIONS of HiveServer2:

```
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/disk2/dumps
```

Make sure you provide the path correctly. Whenever there is a crash, an hprof file will be generated. You can use Eclipse MAT or JXRay to analyze the leak suspects. You can also take a heap dump on demand using the "jmap" utility when consumption reaches 80%:

```
jmap -dump:live,format=b,file=/disk2/dumps/dump.hprof <PID of HiveServer2>
```

Please let us know if your queries are answered, and please "Accept as Solution" if they are.