Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 37
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 393 | 08-29-2025 12:27 AM |
| | 1021 | 11-21-2024 10:40 PM |
| | 977 | 11-21-2024 10:12 PM |
| | 3048 | 07-23-2024 10:52 PM |
| | 2153 | 05-16-2024 12:27 AM |
12-08-2021
09:29 PM
@HareshAmin Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
12-01-2021
08:41 AM
Hi,

The Reducer processes the output of the mapper. It takes the set of intermediate key-value pairs produced by the mapper as input, runs a reduce function on each of them, and produces a new set of output, which HDFS then stores. The number of mappers and reducers depends on the data being processed.

You can manually set the number of reducers with the property below, though it is generally not recommended:

set mapred.reduce.tasks=xx;

Regards,
Chethan YM
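As a minimal sketch of the recommended alternative, you can let Hive estimate the reducer count from the input size instead of forcing a fixed number. The values below are illustrative only, and `sales` is a hypothetical table:

```sql
-- Let Hive size the reducer count automatically from the data volume.
SET hive.exec.reducers.bytes.per.reducer=268435456;  -- target ~256 MB of input per reducer
SET hive.exec.reducers.max=99;                       -- upper bound on the estimated count

-- Hard override (the legacy property mentioned above; newer releases
-- use mapreduce.job.reduces instead):
-- SET mapred.reduce.tasks=10;

SELECT region, COUNT(*) FROM sales GROUP BY region;
```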
11-10-2021
09:23 AM
2021-11-02 09:27:36,555 WARN org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork: [HiveServer2-Background-Pool: Thread-180]: Exception thrown while processing using a batch size 15
org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expecting a partition with name extract_date=2018-02-15, but metastore is returning a partition with name extract_date=2018-02-15 .)
    at org.apache.hadoop.hive.ql.metadata.Hive.createPartitions(Hive.java:2201) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask$1.execute(DDLTask.java:2020) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask$1.execute(DDLTask.java:1999) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork.run(RetryUtilities.java:93) [hive-common-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.createPartitionsInBatches(DDLTask.java:2027) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1918) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:413) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_231]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_231]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_231]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_231]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Expecting a partition with name extract_date=2018-02-15, but metastore is returning a partition with name extract_date=2018-02-15 .
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java:64399) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java:64358) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result.read(ThriftHiveMetastore.java:64281) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_add_partitions_req(ThriftHiveMetastore.java:1819) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.add_partitions_req(ThriftHiveMetastore.java:1806) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
11-10-2021
04:26 AM
Hi @ChethanYM,
1) Unfortunately, I haven't kept the full log from the query.
2) Exactly, this is my issue.
3) From the impala-shell.
4) If it is QUERY_TIMEOUT_S, then it has the default value.
Regards,
Teo
11-08-2021
05:29 AM
@ighack Did you resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
11-04-2021
11:38 PM
Hi,

As per my previous comment, can you destroy it, restart LLAP, and see if this works? A quick verification sketch follows below.

Note: llap0 is the default application that runs when LLAP is installed; even if you destroy it, it will be recreated when you restart the service.

# yarn app -destroy llap0

Regards,
Chethan YM
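As a minimal sketch (assuming the YARN service CLI is available on the node and the application is named llap0, as above), destroying and confirming could look like this:

```
# Destroy the default LLAP YARN service application
yarn app -destroy llap0

# Confirm it no longer appears before restarting the LLAP service
yarn app -list | grep llap0 || echo "llap0 destroyed"
```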
11-03-2021
02:55 AM
Hi @ChethanYM, load_catalog_in_background is unchecked. We were not observing any JVM pauses in the Catalog logs; however, we were seeing RPC-related alerts. We have restarted Impala and the issue seems to have been fixed. Thanks, Wert
11-01-2021
05:01 AM
Hi @harnu,

Could you add the values below to the /etc/security/limits.conf file and see if this helps? If you already have these values, try increasing them and rerun the query.

hive soft nofile 65000
hive hard nofile 65000
hive soft nproc 80000
hive hard nproc 80000

Update limits.conf with the above settings and restart the HiveServer2 service, stopping all the services on that node first. Then run su - hive followed by ulimit -a to check whether the limits for the number of open files and max user processes are updated.

Regards,
Chethan YM

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
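A quick way to verify the new limits took effect, as a minimal sketch assuming the settings above were applied:

```
# Check the hive user's effective limits after the change
su - hive -c 'ulimit -n -u'
# Expected with the settings above: 65000 open files, 80000 max user processes
```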
10-28-2021
05:23 AM
@pauljoshiva, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
10-26-2021
03:57 AM
Thank you all for your responses. This issue was caused by a credentials problem, and it is resolved now.