Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6666 | 09-21-2018 03:16 AM
 | 3193 | 07-25-2018 05:03 AM
 | 4136 | 02-13-2018 02:00 AM
 | 1929 | 01-21-2018 02:47 AM
 | 37936 | 08-08-2017 10:32 AM
08-09-2017
02:56 AM
Beeline connectivity is working fine, as shown below. If you are getting an error, kindly share the full work log for a clearer picture.
user@CENT:~> beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/spark/lib/spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Beeline version 1.2.1.2.3.2.48-5 by Apache Hive
beeline> !connect jdbc:hive2://HDP:10000    (HDP = your Hive server hostname)
Connecting to jdbc:hive2://HDP:10000
Enter username for jdbc:hive2://HDP:10000: user
Enter password for jdbc:hive2://HDP:10000: Password
Connected to: Apache Hive (version 1.2.1.2.3.2.48-5)
Driver: Hive JDBC (version 1.2.1.2.3.2.48-5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://HDP:10000> use database;
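For troubleshooting, beeline can also be run non-interactively so the whole session log is captured in one go. A minimal sketch, assuming the host `HDP`, port 10000, and the placeholder credentials from the post; the actual beeline call is commented out because it needs a running HiveServer2:

```shell
# Build the JDBC URL used above (host/port are the post's placeholders).
HIVE_HOST="HDP"
HIVE_PORT=10000
JDBC_URL="jdbc:hive2://${HIVE_HOST}:${HIVE_PORT}"

# Non-interactive connection; tee the full output to a log you can share:
# beeline -u "$JDBC_URL" -n user -p password -e "show databases;" 2>&1 | tee beeline.log

echo "$JDBC_URL"
```

The `-u`, `-n`, `-p`, and `-e` flags are standard beeline options; piping through `tee` keeps a copy of the full work log for sharing.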
08-08-2017
10:32 AM
Finally it worked for me, with a workaround. Steps as below:
1) Create a temp table with the same columns.
2) Overwrite the temp table with the required row data.
3) Drop the Hive partition and the HDFS directory.
4) Insert records back for the respective partition and rows.
5) Verify the counts.

1) hive> select count(*) from emptable where od='17_06_30' and ccodee!='123';
OK
27
hive> select count(*) from emptable where od='17_06_30' and ccodee='123';
OK
7
hive> show create table emptable;   -- note the HDFS location (needed in step 3)
2) Create the temp table and overwrite it with the required partitioned data:
hive> CREATE TABLE `emptable_tmp`(
        `rowid` string)
      PARTITIONED BY (`od` string)
      ROW FORMAT SERDE
        'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
      STORED AS INPUTFORMAT
        'org.apache.hadoop.mapred.SequenceFileInputFormat'
      OUTPUTFORMAT
        'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat';
hive> insert into emptable_tmp partition(od) select * from emptable where od='17_06_30' and ccodee!='123';
Time taken for adding to write entity : 1
Partition database.emptable_tmp{od=17_06_30} stats: [numFiles=20, numRows=27,totalSize=6216,rawDataSize=5502]
OK
3) Drop the partition from Hive and the HDFS directory as well, as this is an external table.
hive> alter table emptable drop partition(od='17_06_30');
Dropped the partition od=17_06_30
OK
Time taken: 0.291 seconds
HDFS partition deletion
# hdfs dfs -rm -r /hdfs/location/emptable/od=17_06_30
4) Insert data for that partition only.
hive> insert into emptable partition(od) select * from emptable_tmp;
Partition database.emptable{od=17_06_30} stats: [numFiles=66, numRows=20, totalSize=5441469982, rawDataSize=]
OK
Time taken: 27.282 seconds
5) Verify the counts for the partition and the respective rows:
hive> select count(*) from emptable where od='17_06_30' and ccodee!='123';
OK
27
hive> select count(*) from emptable where od='17_06_30' and ccodee='123';
OK
0
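The five steps above can be sketched as one script. This is a hedged outline, not a tested job: the table, column, and path names are taken from the post, and the cluster commands are commented out because they require a live Hive/HDFS environment:

```shell
# Partition value and the code to purge (values from the post).
OD="17_06_30"
CODE="123"

# 1-2) Stage the rows to keep into the temp table:
# hive -e "INSERT INTO emptable_tmp PARTITION(od) SELECT * FROM emptable WHERE od='${OD}' AND ccodee!='${CODE}';"

# 3) Drop the Hive partition, then its HDFS directory (external table):
# hive -e "ALTER TABLE emptable DROP PARTITION(od='${OD}');"
# hdfs dfs -rm -r "/hdfs/location/emptable/od=${OD}"

# 4) Reload the partition from the temp table:
# hive -e "INSERT INTO emptable PARTITION(od) SELECT * FROM emptable_tmp;"

# 5) Verify; rows with the purged code should now count 0:
# hive -e "SELECT count(*) FROM emptable WHERE od='${OD}' AND ccodee='${CODE}';"

echo "od=${OD}"
```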
08-08-2017
06:30 AM
Looks like it's not working for partitioned tables; please verify the logs.
08-08-2017
05:02 AM
Looks like it's not working for partitioned tables; please verify the logs. Counts before the delete:
hive> select count(*) from emptable where ods='2017_06_30' and code!='123';
OK
12
Time taken: 32.57 seconds, Fetched: 1 row(s)
Delete Command
hive> set hive.support.concurrency=true;
hive> set hive.enforce.bucketing=true;
hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
hive> delete emptable where ods='2017_06_30' and code!='123';
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
Query returned non-zero code: 1, cause: null
After the delete, still the same records:
hive> select count(*) from emptable where ods='2017_06_30' and code!='123';
OK
12
Time taken: 26.406 seconds, Fetched: 1 row(s)
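Two things are going on in the failure above. The `Usage: delete [FILE|JAR|ARCHIVE]` message means the Hive CLI parsed `delete emptable ...` as its resource-delete command, because the statement is missing `FROM`. And even with `DELETE FROM`, Hive only supports row-level deletes on ACID tables: ORC format, bucketed, with `transactional=true`, under the DbTxnManager settings already set above. A sketch of the prerequisites, with a hypothetical table name (`emptable_acid`); the DDL is only printed here, not executed:

```shell
# Hive row-level DELETE needs an ACID table. This DDL is a sketch
# (emptable_acid is hypothetical); echoed for illustration only.
DDL=$(cat <<'SQL'
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
CREATE TABLE emptable_acid (rowid string, code string)
  PARTITIONED BY (ods string)
  CLUSTERED BY (rowid) INTO 4 BUCKETS
  STORED AS ORC
  TBLPROPERTIES ('transactional'='true');
DELETE FROM emptable_acid WHERE ods='2017_06_30' AND code!='123';
SQL
)
echo "$DDL"
```

For an existing non-ACID external table like `emptable`, the partition-rewrite workaround in the later post is the practical route.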
08-07-2017
04:51 AM
Thanks for the reply. Yes, I tried to delete using the command below.
hive> delete emp_table where ods='2017_006_30' and id=1;
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
Query returned non-zero code: 1, cause: null
The output looked as if it deleted, but the rows are actually not deleted from the table. Hive version: 1.2.1+
08-06-2017
11:32 PM
hive> delete from student where ods='2017_006_30' and id=1;
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
Query returned non-zero code: 1, cause: null
I am getting this result; however, the data is not getting deleted from the Hive table?
Labels:
- Apache Hive
07-12-2017
05:44 AM
Our HiveServer2 keeps going down because of this issue. Has anyone implemented these steps, and is it working fine? https://community.hortonworks.com/articles/591/using-hive-with-pam-authentication.html
06-18-2017
04:22 PM
It went fine after restarting the httpd service.
[root@repository ~]# service httpd status
httpd (pid 2030) is running...
[root@repository ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.exabytes.com.my
* extras: centos.exabytes.com.my
* updates: centos.exabytes.com.my
repo id repo name status
HDP-2.1 HDP-2.1 98
HDP-UTILS-1.1.0.19 HDP-UTILS-1.1.0.19 48
Updates-ambari-2.2.0.0 ambari-2.2.0.0 - Updates 8
base CentOS-6 - Base 6,706
extras CentOS-6 - Extras 45
mysql-connectors-community MySQL Connectors Community 36
mysql-tools-community MySQL Tools Community 47
mysql56-community MySQL 5.6 Community Server 358
updates CentOS-6 - Updates 358
repolist: 7,704
[root@repository ~]#
06-17-2017
03:41 AM
[root@repository ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.exabytes.com.my
* extras: centos.exabytes.com.my
* updates: centos.exabytes.com.my
http://centos/yum/HDP/centos6/2.x/updates/2.1.10.0/repodata/repomd.xml: [Errno 14] PYCURL ERROR 7 - "couldn't connect to host"
Trying other mirror.
http://centos/yum/HDP-UTILS-1.1.0.19/repos/centos6/repodata/repomd.xml: [Errno 14] PYCURL ERROR 7 - "couldn't connect to host"
Trying other mirror.
http://centos/yum/AMBARI-2.2.2.0/centos6/2.2.2.0-460/repodata/repomd.xml: [Errno 14] PYCURL ERROR 7 - "couldn't connect to host"
Trying other mirror.
repo id repo name status
HDP-2.1 HDP-2.1 98
HDP-UTILS-1.1.0.19 HDP-UTILS-1.1.0.19 48
Updates-ambari-2.2.0.0 ambari-2.2.0.0 - Updates 8
base CentOS-6 - Base 6,706
extras CentOS-6 - Extras 45
mysql-connectors-community MySQL Connectors Community 36
mysql-tools-community MySQL Tools Community 47
mysql56-community MySQL 5.6 Community Server 358
updates CentOS-6 - Updates 358
repolist: 7,704
[root@repository ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.exabytes.com.my
* extras: centos.exabytes.com.my
* updates: centos.exabytes.com.my
HDP-2.1 | 2.9 kB 00:00
HDP-UTILS-1.1.0.19 | 2.9 kB 00:00
Updates-ambari-2.2.0.0 | 2.9 kB 00:00
repo id repo name status
HDP-2.1 HDP-2.1 98
HDP-UTILS-1.1.0.19 HDP-UTILS-1.1.0.19 48
Updates-ambari-2.2.0.0 ambari-2.2.0.0 - Updates 8
base CentOS-6 - Base 6,706
extras CentOS-6 - Extras 45
mysql-connectors-community MySQL Connectors Community 36
mysql-tools-community MySQL Tools Community 47
mysql56-community MySQL 5.6 Community Server 358
updates CentOS-6 - Updates 358
repolist: 7,704
[root@repository ~]#
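When yum reports `PYCURL ERROR 7 - "couldn't connect to host"` for a local repository, the first thing to check is whether the web server hosting the repo is up and its `repomd.xml` is reachable; restarting httpd, as above, was the fix here. A quick-check sketch, using the repomd.xml URL from the failing output (the probe and restart are commented out because they need the repo host):

```shell
# repomd.xml URL from the failing local mirror (taken from the post).
REPO_URL="http://centos/yum/HDP/centos6/2.x/updates/2.1.10.0/repodata/repomd.xml"

# Probe it; if unreachable, restart the repo's web server:
# curl -sSf -o /dev/null "$REPO_URL" || service httpd restart
# yum clean all && yum repolist   # re-check after the restart

echo "$REPO_URL"
```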
06-17-2017
03:38 AM
Please try the following; it may be helpful.
1) Download the MySQL connector jar and place it at /usr/share/java/mysql-connector-java.jar.
2) The MySQL client should be installed on the Ambari server; if MySQL itself is already installed there, this is not required.
3) Verify that the MySQL connector works (username and password) from the Ambari server, as below.
Cross-checking whether the MySQL connector works on the Ambari server:
[root@]# /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord hive
Metastore connection URL: jdbc:mysql://centos2.test.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 0.13.0
Initialization script
hive-schema-0.13.0.mysql.sql
Initialization script completed
schemaTool completed
[root@centos ~]#
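Beyond the schematool check, two quick sanity checks can be run from the Ambari server: connect with the mysql client using the Hive credentials, and point Ambari explicitly at the connector jar. The host and credentials below are the post's examples; the cluster commands are commented out:

```shell
# Connector jar location from step 1.
JAR="/usr/share/java/mysql-connector-java.jar"

# Check the Hive credentials against the metastore DB (host from the post):
# mysql -h centos2.test.com -u hive -phive -e "SELECT 1;"

# Register the connector with Ambari so managed services can pick it up:
# ambari-server setup --jdbc-db=mysql --jdbc-driver="$JAR"

echo "$JAR"
```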