Member since: 05-09-2016
Posts: 421
Kudos Received: 54
Solutions: 32

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3689 | 04-22-2022 11:31 AM |
| | 4168 | 01-20-2022 11:24 AM |
| | 3395 | 11-23-2021 12:53 PM |
04-22-2022
11:31 AM
Hi @ToddP, can you try again after setting the following? `set hive.vectorized.execution.enabled=false;`
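For context, vectorized execution processes batches of rows at a time and can occasionally trip bugs in specific file readers, so disabling it per session is a quick way to confirm whether it is the culprit. A minimal session sketch (the table name is a placeholder, not from the original thread):

```sql
-- Disable vectorized execution for this session only, then re-run the failing query
set hive.vectorized.execution.enabled=false;
-- e.g. select count(*) from your_table;  -- placeholder for the original query
```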
04-19-2022
09:39 AM
This error does not look like it is coming from the Cloudera JDBC driver. It is loading org.apache.hive.jdbc; Cloudera's driver class names start with com.cloudera*
04-14-2022
03:46 PM
Hi, the video KB you are following is more than two years old. Try using the Cloudera JDBC driver for Hive. You can download the latest driver and user guide from the links below: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-17.html https://docs.cloudera.com/documentation/other/connectors/hive-jdbc/2-6-17.html
04-05-2022
02:24 PM
Did you try the hint mentioned in the ERROR? Try: <hiveserver2 host>:10000/; Are you missing a '/' after the hostname?
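For illustration, a full connection URL with the '/' in place would look like the line below (the host name and session option are placeholders, not from the original question). The '/' separates the port from the (here empty) database name before the session options begin:

```
jdbc:hive2://hs2.example.com:10000/;transportMode=binary
```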
01-20-2022
11:24 AM
1 Kudo
Yes, there are a lot of places to check, but without knowing what you are looking for you will get lost. You can start with what you see on the screen/console where you run the query. In beeline you see the Tez job summary, which has a lot of details to look at. One example of a tuning guide is below: https://community.cloudera.com/t5/Community-Articles/Demystify-Apache-Tez-Memory-Tuning-Step-by-Step/ta-p/245279

Update: I see you are using CDH 6, which does not have Tez. For CDH you can refer to the link below: https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/admin_hive_tuning.html#concept_u51_lkv_cv
01-20-2022
10:37 AM
1 Kudo
Hi Ulises, this is expected. When you do a `select *` without any complex aggregation or function, Hive can read the data directly from the files on HDFS. But a `count` needs computation, which involves launching a job and performing the required aggregation, and that takes time.
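To make the distinction concrete, compare the two statements below (the table name is a placeholder). The first can be served as a simple fetch; the second triggers a full job:

```sql
-- Simple fetch: rows are streamed straight from the table's files on HDFS
select * from sales_data limit 10;

-- Aggregation: a job must scan all rows and compute the count
select count(*) from sales_data;
```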
12-08-2021
01:22 PM
Please check the official MySQL documentation as well: https://dev.mysql.com/doc/refman/5.7/en/log-file-maintenance.html
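Among other things, the linked page covers purging binary logs. As a sketch, binary logs can be trimmed manually or expired automatically (the date and the retention value below are illustrative placeholders):

```sql
-- Remove binary logs older than a given date (run on the MySQL server)
PURGE BINARY LOGS BEFORE '2021-12-01 00:00:00';

-- Or have the server expire them automatically after N days
SET GLOBAL expire_logs_days = 7;
```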
11-23-2021
12:53 PM
Hi @Andyjmoss As you already pointed out in https://community.cloudera.com/t5/Support-Questions/How-are-number-of-mappers-determined-for-a-query... there is no per-query limit; you can only adjust the max and min grouping size to influence the number of mapper tasks. "Would this then impact the structure of data stored by my data?" No, this only affects how much data each map task will get.
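As a sketch, on Tez-based Hive the grouping sizes can be adjusted per session as below (the byte values are illustrative only, not recommendations):

```sql
-- A lower max grouping size means more, smaller mapper tasks (illustrative values)
set tez.grouping.min-size=16777216;   -- 16 MB
set tez.grouping.max-size=268435456;  -- 256 MB
```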
05-25-2017
07:52 PM
1 Kudo
Currently Ranger does not support defining the passwords of the default admin users (rangerusersync and rangertagsync) during installation.
This is a security concern if you miss updating the passwords post-installation, as the default password for these users is the same as the username, thereby compromising admin access to Ranger.
There is a plan to address this in a future HDP release by allowing the passwords for the admin users to be defined during installation; for the time being, the workaround below can be used to enforce a password policy.
Once the Ranger installation is successful, the REST API GET and PUT calls below can be used to change the password of any user.
This is helpful in the case of automated cluster installation using an Ambari blueprint.

To change a user's password, first get the user details using the following call:

```shell
curl -s -u admin:admin -H "Accept: application/json" -H "Content-Type: application/json" \
  -X GET http://h2.openstacklocal:6080/service/xusers/users/2
```

You will get something like this:

```json
{"id":2,"createDate":"2017-05-24T17:36:49Z","updateDate":"2017-05-25T08:09:25Z","owner":"Admin","updatedBy":"Admin","name":"rangerusersync","firstName":"rangerusersync","lastName":"","password":"*******","description":"rangerusersync","groupIdList":[],"groupNameList":[],"status":1,"isVisible":1,"userSource":0,"userRoleList":["ROLE_SYS_ADMIN"]}
```

Now send a PUT request with the above information after setting the required password, as given below:

```shell
curl -iv -u admin:admin -X PUT -H "Accept: application/json" -H "Content-Type: application/json" \
  http://h2.openstacklocal:6080/service/xusers/secure/users/2 \
  -d '{"id":2,"createDate":"2017-05-24T17:36:49Z","updateDate":"2017-05-25T08:09:25Z","owner":"Admin","updatedBy":"Admin","name":"rangerusersync","firstName":"rangerusersync","lastName":"","password":"Password@123","description":"rangerusersync","groupIdList":[],"groupNameList":[],"status":1,"isVisible":1,"userSource":0,"userRoleList":["ROLE_SYS_ADMIN"]}'
```

PS: To change the password of the keyadmin user, use keyadmin credentials in the REST API call. Thanks @Ronak bansal for testing this.
03-21-2017
09:46 AM
SYMPTOM
Concurrency issues are hit during the multi-threaded moveFile operation when processing queries such as "INSERT OVERWRITE TABLE ... SELECT ...".
The following pattern is displayed in the stack trace:

```
Loading data to table testdb.test_table from hdfs://xyz/ra_hadoop/.hive-staging_hive_2017-01-31_14-09-52_561_8101886747064006778-4/-ext-10000
ERROR : Failed with exception java.util.ConcurrentModificationException
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.ConcurrentModificationException
	at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2883)
	at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3140)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1727)
	at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:353)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1745)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1491)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1151)
```

ROOT CAUSE
This issue occurs because of the defect described in HIVE-15355.
WORKAROUND
Set the following property on the client side:

```
set hive.mv.files.thread=0;
```

Then re-run the Hive query.

Note: The fix for HIVE-15355 is expected to be included in the next major release of HDP.