Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 9104 | 09-17-2018 06:33 AM |
| 2379 | 08-29-2018 07:48 AM |
| 3376 | 08-28-2018 12:38 PM |
| 2866 | 08-03-2018 05:42 AM |
| 2585 | 07-27-2018 04:00 PM |
07-17-2017
06:07 PM
@Dhiraj If the idea is to drop the database and all tables within it, then the database directory is indeed removed from HDFS.
Is the use case to remove only the tables, rather than the database and its directory structure?
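For reference, a minimal HiveQL sketch of the two options (the database and table names are hypothetical):

```sql
-- Option 1: drop only the tables; the database and its HDFS directory remain
DROP TABLE IF EXISTS mydb.table1;
DROP TABLE IF EXISTS mydb.table2;

-- Option 2: drop the database and every table in it;
-- this also removes the database directory from HDFS
DROP DATABASE IF EXISTS mydb CASCADE;
```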
07-13-2017
07:34 AM
@Harish Nerella This scenario, where two different processes update a table (even two different rows) at the same time, is not possible at the moment for ACID tables.
Currently, the ACID concurrency-management mechanism works at the partition level for partitioned tables and at the table level for non-partitioned tables (which I believe is our case). Essentially, the system wants to prevent two parallel transactions from updating the same row. Unfortunately, it cannot track this at the individual row level; it does so at the partition and table level, respectively. Refer to HIVE-13395 for more details.
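As an illustration of the table-level locking, assuming a non-partitioned ACID table named t (hypothetical), two sessions updating different rows still conflict:

```sql
-- Session 1: acquires a table-level write lock on t
UPDATE t SET col = 'a' WHERE id = 1;

-- Session 2, run concurrently: waits (or aborts, depending on the
-- hive.txn/lock settings) until session 1's transaction completes,
-- even though it touches a different row
UPDATE t SET col = 'b' WHERE id = 2;
```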
07-12-2017
06:46 AM
@Federico D'Ambrosio The original error looks to be caused by a library mismatch: storm-hive is set to use thrift 0.9.0 ( https://github.com/hortonworks/storm-release/blob/HDP-2.5.3.0-tag/external/storm-hive/pom.xml ), when I think it should use the thrift.version variable from the parent storm-release pom (https://github.com/hortonworks/storm-release/blob/HDP-2.5.3.0-tag/pom.xml). This issue is addressed in HDP 2.5.5 and HDP 2.6.1.
07-10-2017
08:10 AM
2 Kudos
@pavan p You need to run -setrep again with 3, so that the extra two replicas are removed:
[hive@x ~]$ hdfs dfs -ls /tmp/hive/sgipbal_np2.avsc
-rw-r--r--   5 hive hdfs      55592 2017-01-16 17:41 /tmp/hive/sgipbal_np2.avsc
[hive@x ~]$ hdfs dfs -setrep 3 /tmp/hive/sgipbal_np2.avsc
Replication 3 set: /tmp/hive/sgipbal_np2.avsc
[hive@x ~]$ hdfs dfs -ls /tmp/hive/sgipbal_np2.avsc
-rw-r--r--   3 hive hdfs      55592 2017-01-16 17:41 /tmp/hive/sgipbal_np2.avsc
07-10-2017
07:52 AM
Saurab Dahal
The issue seems to be on a specific NodeManager. Verify the following: 1. tez.tar.gz in the local cache could be corrupt; remove the cached file from that NodeManager. 2. Restart the NodeManager.
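A sketch of the cleanup, assuming the default HDP NodeManager local directory /hadoop/yarn/local (check yarn.nodemanager.local-dirs for the actual path on your cluster; the cache-entry path is a placeholder):

```shell
# On the affected NodeManager host: locate the cached Tez tarball
find /hadoop/yarn/local/filecache -name 'tez.tar.gz*'

# Remove the corrupt cache entry; it is re-localized on the next job
rm -rf /hadoop/yarn/local/filecache/<id-of-corrupt-entry>

# Then restart the NodeManager (e.g. from the Ambari UI)
```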
07-10-2017
04:54 AM
@Rishit shah Does the query fail while running a SELECT query? This seems related to HIVE-13877.
07-07-2017
07:46 AM
1 Kudo
@Rishit shah Try running hplsql as:
java -cp /home/devRht/hplsql/hplsql-0.3.17/hplsql.jar:/home/devRht/hplsql/hplsql-0.3.17/antlr-runtime-4.5.jar:$HADOOP_CLASSPATH org.apache.hive.hplsql.Hplsql "$@"
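Once the classpath is fixed, a quick sanity check with an inline statement (a sketch, assuming the same jar locations as above):

```shell
# Run an inline HPL/SQL statement to verify the wrapper works
java -cp /home/devRht/hplsql/hplsql-0.3.17/hplsql.jar:/home/devRht/hplsql/hplsql-0.3.17/antlr-runtime-4.5.jar:$HADOOP_CLASSPATH \
  org.apache.hive.hplsql.Hplsql -e "PRINT 'hello from hplsql';"
```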
07-07-2017
07:08 AM
@Ozhan Gulen There are certain known issues with UNION ALL and Tez vectorization. Could you share the execution plan for the UNION ALL query for more insight into the issue?
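To capture the plan, something like the following (table and column names are hypothetical):

```sql
-- Show the Tez execution plan, including whether vectorization kicks in
EXPLAIN
SELECT col FROM t1
UNION ALL
SELECT col FROM t2;

-- As a quick test, you can also disable vectorization for the session:
-- SET hive.vectorized.execution.enabled=false;
```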
07-07-2017
05:54 AM
If the mysqldump is from a Hive version other than 2.1.1000 (shipped with HDP 2.6), then do the following:
1. Stop Hive services from Ambari.
2. Create a new database in MySQL, say hive2:
mysql> create database hive2;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on hive2.* to 'hive'@'%' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
3. Restore the database: mysql -u hive -phive hive2 < dumpfilename.sql
4. Update the database connection string for MySQL under Ambari -> Hive configs.
5. Save the configuration and try restarting. Since the VERSION differs, service startup will fail.
6. Run the Hive schematool command to upgrade the schema:
[hive@ssnode260 bin]$ /usr/hdp/2.6.0.3-8/hive2/bin/schematool -upgradeSchema -dbType mysql
7. Restart Hive services from Ambari.
If the Hive metadata version is the same as Hive 2.1.1000 in HDP 2.6, then follow only steps 1 through 5.
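The restore-and-upgrade steps above, collected as one shell sketch (the database name hive2, the dump file name, and the HDP version path are taken from the post; adjust them for your environment):

```shell
# Steps 2-3: create the new metastore database and restore the dump into it
mysql -u root -p -e "CREATE DATABASE hive2; GRANT ALL PRIVILEGES ON hive2.* TO 'hive'@'%' IDENTIFIED BY 'hive';"
mysql -u hive -phive hive2 < dumpfilename.sql

# Step 6: after pointing Ambari's Hive config at hive2, upgrade the schema
/usr/hdp/2.6.0.3-8/hive2/bin/schematool -upgradeSchema -dbType mysql
```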