Member since: 04-22-2016
Posts: 931
Kudos Received: 46
Solutions: 26
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 349 | 10-11-2018 01:38 AM |
|  | 508 | 09-26-2018 02:24 AM |
|  | 445 | 06-29-2018 02:35 PM |
|  | 686 | 06-29-2018 02:34 PM |
|  | 1491 | 06-20-2018 04:30 PM |
05-28-2019
09:51 PM
So there will be two metastores? And how can I "connect it to my cluster" using hive-site.xml?
05-28-2019
07:51 PM
With HDP 3.0 it installs Hive 3.1, but I want Hive 2.3. Is it possible to replace Hive 3.1 with Hive 2.3? If yes, how?
05-22-2019
07:21 PM
When installing Hive, the DB connection test fails if I use the driver that came with HDP, but succeeds if I use an older driver I already have (I don't know its version).
[root@hadoop1 resources]# pwd
/var/lib/ambari-server/resources
[root@hadoop1 resources]# ls -al mysql-connector-java.jar
lrwxrwxrwx. 1 root root 64 May 22 14:56 mysql-connector-java.jar -> /var/lib/ambari-server/resources/mysql-connector-java-8.0.15.jar
[root@hadoop1 resources]#
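A plausible cause worth checking: MySQL Connector/J 8.0 renamed the driver class from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver, and older Ambari/Hive stacks still reference the old class name, which would explain why the 8.0.15 jar fails the connection test while an older 5.x jar passes. A sketch of re-registering a 5.x driver with Ambari (the jar path is a placeholder):

ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql-connector-java-5.1.x.jar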
05-22-2019
02:06 PM
Can someone please help? I reinstalled both Hive and Spark2, but it is still complaining.
05-22-2019
03:01 AM
HiveServer2 is started and the following proxyuser properties are set (in core-site.xml):
hadoop.proxyuser.hdfs.groups = *
hadoop.proxyuser.hdfs.hosts = *
hadoop.proxyuser.hive.groups = *
hadoop.proxyuser.hive.hosts = *
hadoop.proxyuser.livy.groups = *
hadoop.proxyuser.livy.hosts = *
hadoop.proxyuser.root.groups = *
hadoop.proxyuser.root.hosts = *
05-22-2019
02:43 AM
I just installed HDP 3.1, but Hive Server is showing the following alert:
Connection failed on host hadoop2.tolls.dot.state.fl.us:10000 (Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 204, in execute
ldap_password=ldap_password)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 84, in check_thrift_port_sasl
timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'beeline -n hive -u 'jdbc:hive2://hadoop2.tolls.dot.state.fl.us:10000/;transportMode=binary' -e ';' 2>&1 | awk '{print}' | grep -i -e 'Connected to:' -e 'Transaction isolation:'' returned 1.
)
05-15-2019
03:00 PM
I have Hive 1.2.1 running under HDP 2.6, and I need to go to Hive 2.3. Is it possible, and what would be the steps? Or can I get the latest version of Hive in a higher HDP release? Please advise.
05-10-2019
06:46 PM
: FINISHED
2019-05-10 14:23:13,482 [232a3e6e-06be-9256-92ee-bf27f13b643c:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id 232a3e6e-06be-9256-92ee-bf27f13b643c issued by anonymous: show tables
2019-05-10 14:23:13,842 [232a3e6e-06be-9256-92ee-bf27f13b643c:frag:0:0] WARN o.a.d.e.s.h.c.TableNameCacheLoader - Failure while attempting to get hive tables. Retries once.
org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type'
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1382) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1405) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:59) [drill-storage-hive-core-1.16.0.jar:1.16.0]
at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:41) [drill-storage-hive-core-1.16.0.jar:1.16.0]
at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3708) [drill-shaded-guava-23.0.ja
05-10-2019
06:38 PM
org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type'
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1382) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1405) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
05-01-2019
09:30 PM
I am trying to install Apache Drill 1.6 with HDP 2.6 using the GitHub repo below, but it is not giving me any hosts to choose in the "Assign Slaves" window, and when I click Next it fails. Is there a more recent GitHub repository than the one below?
https://github.com/dvergari/ambari-drill-service
04-03-2019
08:01 PM
Is it supported now in the latest Phoenix releases? If yes, are any articles or examples available?
04-01-2019
07:31 PM
For a value of 99, HBase shows the key value as "\x80\x00\x00c", and for a value of 1 in Phoenix, HBase shows it as "\x80\x00\x00\x01". How are these values generated? The hex of decimal 1 is 1, but what is "\x80"? Also, the hex of decimal 99 is 63, so how do we get "\x00c"?
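Phoenix serializes an INTEGER as 4 big-endian bytes with the sign bit flipped, so that negative and positive values sort correctly as unsigned byte strings; the HBase shell then prints any printable byte as its ASCII character, and 0x63 is 'c', which is where "\x00c" comes from. A minimal Java sketch of the idea (encodeInt is an illustrative helper, not Phoenix's actual API):

import java.nio.ByteBuffer;

public class PhoenixIntEncoding {
    // Illustrative re-implementation: 4-byte big-endian two's complement
    // with the sign bit flipped, as Phoenix does for INTEGER.
    static byte[] encodeInt(int v) {
        byte[] b = ByteBuffer.allocate(4).putInt(v).array();
        b[0] ^= 0x80; // flip the sign bit so signed values sort as unsigned bytes
        return b;
    }

    public static void main(String[] args) {
        for (int v : new int[] {1, 99}) {
            StringBuilder sb = new StringBuilder();
            for (byte x : encodeInt(v)) sb.append(String.format("%02x ", x & 0xff));
            System.out.println(v + " -> " + sb); // 1 -> 80 00 00 01, 99 -> 80 00 00 63
        }
    }
}

So 99 is stored as the bytes 0x80 0x00 0x00 0x63, and the shell renders the printable last byte as the character 'c', giving "\x80\x00\x00c".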
02-28-2019
07:29 PM
I am loading a Phoenix table, but in HBase this table's row key is shown as hex values. How can I correct this to see the ASCII values, i.e. the integer value?
sqlline version 1.1.8
0: jdbc:phoenix:localhost:2181:/hbase-unsecur> SELECT * FROM EXAMPLE;
+--------+-------------+------------+
| MY_PK | FIRST_NAME | LAST_NAME |
+--------+-------------+------------+
| 12345 | John | Doe |
| 67890 | Mary | Jane |
+--------+-------------+------------+
2 rows selected (0.053 seconds)
Version 1.1.2.2.6.5.0-292, r897822d4dd5956ca186974c10382e9094683fa29, Fri May 11 08:01:42 UTC 2018
hbase(main):001:0> scan 'EXAMPLE'
ROW COLUMN+CELL
 \x80\x00\x00\x00\x00\x0009 column=M:FIRST_NAME, timestamp=1551381906272, value=John
 \x80\x00\x00\x00\x00\x0009 column=M:LAST_NAME, timestamp=1551381906272, value=Doe
 \x80\x00\x00\x00\x00\x0009 column=M:_0, timestamp=1551381906272, value=x
 \x80\x00\x00\x00\x00\x01\x092 column=M:FIRST_NAME, timestamp=1551381906272, value=Mary
 \x80\x00\x00\x00\x00\x01\x092 column=M:LAST_NAME, timestamp=1551381906272, value=Jane
 \x80\x00\x00\x00\x00\x01\x092 column=M:_0, timestamp=1551381906272, value=x
2 row(s) in 0.2030 seconds
02-28-2019
03:41 PM
I am following the GitHub link below, but I can't find the JDBC driver, and the elasticsearch-plugin command does not recognize the syntax given. Can someone point out the latest related document for working with HBase 1.1.2, Phoenix 4.7, and Elasticsearch 6.x?
http://lessc0de.github.io/connecting_hbase_to_elasticsearch.html
02-26-2019
04:05 PM
So if the directory containing hbase-site.xml is in the classpath, then no connection information needs to be given in the Java code?
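That is how it works: the client reads hbase.zookeeper.quorum and related settings from the hbase-site.xml it finds on the classpath. For comparison, a minimal sketch of supplying the connection details in code instead (the ZooKeeper hosts and znode parent below are placeholders, not values from this cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ExplicitHBaseConfig {
    public static void main(String[] args) {
        // Defaults plus any hbase-site.xml found on the classpath...
        Configuration conf = HBaseConfiguration.create();
        // ...overridden here with explicit connection settings (placeholder hosts).
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        System.out.println("quorum = " + conf.get("hbase.zookeeper.quorum"));
    }
}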
02-26-2019
03:41 PM
I am not specifying anything about the HBase server or ZooKeeper, but this code works fine and the HBase table gets created. How does the code know where to connect to the HBase server?
[root@hadoop1 ~]# more TestHbaseTable.java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TestHbaseTable {
    public static void main(String[] args) throws IOException {
        HBaseConfiguration hconfig = new HBaseConfiguration(new Configuration());
        HTableDescriptor htable = new HTableDescriptor("User");
        htable.addFamily(new HColumnDescriptor("Id"));
        htable.addFamily(new HColumnDescriptor("Name"));
        System.out.println("Connecting...");
        HBaseAdmin hbase_admin = new HBaseAdmin(hconfig);
        System.out.println("Creating Table...");
        hbase_admin.createTable(htable);
        System.out.println("Done!");
    }
}
02-25-2019
08:55 PM
I have the Hadoop classpath defined, but when I use it as per the documentation I am getting errors:
[root@hadoop1 conf]# hadoop classpath
/usr/hdp/2.6.5.0-292/hadoop/conf:/usr/hdp/2.6.5.0-292/hadoop/lib/*:/usr/hdp/2.6.5.0-292/hadoop/.//*:/usr/hdp/2.6.5.0-292/hadoop-hdfs/./:/usr/hdp/2.6.5.0-292/hadoop-hdfs/lib/*:/usr/hdp/2.6.5.0-292/hadoop-hdfs/.//*:/usr/hdp/2.6.5.0-292/hadoop-yarn/lib/*:/usr/hdp/2.6.5.0-292/hadoop-yarn/.//*:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/lib/*:/usr/hdp/2.6.5.0-292/hadoop-mapreduce/.//*:/usr/jdk64/jdk1.8.0_112/lib/tools.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java.jar:ojdbc6.jar:/usr/hdp/2.6.5.0-292/tez/*:/usr/hdp/2.6.5.0-292/tez/lib/*:/usr/hdp/2.6.5.0-292/tez/conf
[root@hadoop1 conf]#
[root@hadoop1 conf]#
[root@hadoop1 conf]# echo $JAVA_HOME
/usr/jdk64/jdk1.8.0_112
[root@hadoop1 conf]#
[root@hadoop1 conf]# javac -cp `hadoop classpath` TestHbaseTable.java
TestHbaseTable.java:3: error: package org.apache.hadoop.hbase does not exist
import org.apache.hadoop.hbase.HBaseConfiguration;
^
TestHbaseTable.java:4: error: package org.apache.hadoop.hbase does not exist
import org.apache.hadoop.hbase.HColumnDescriptor;
^
TestHbaseTable.java:5: error: package org.apache.hadoop.hbase does not exist
import org.apache.hadoop.hbase.HTableDescriptor;
^
TestHbaseTable.java:6: error: package org.apache.hadoop.hbase.client does not exist
import org.apache.hadoop.hbase.client.HBaseAdmin;
^
TestHbaseTable.java:12: error: cannot find symbol
HBaseConfiguration hconfig = new HBaseConfiguration(new Configuration());
^
symbol: class HBaseConfiguration
location: class TestHbaseTable
TestHbaseTable.java:12: error: cannot find symbol
HBaseConfiguration hconfig = new HBaseConfiguration(new Configuration());
^
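A likely cause, worth verifying: the hadoop classpath output above contains no HBase jars, so the HBase client classes are not on the compile classpath at all. Assuming the usual HDP layout, appending the output of the hbase classpath command should let the compile get past these errors (the exact path to the hbase script may differ):

javac -cp "$(hadoop classpath):$(/usr/hdp/current/hbase-client/bin/hbase classpath)" TestHbaseTable.java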
02-25-2019
07:19 PM
I am using HDP 2.6, HBase 1.1.2, Python 3.6.
[root@hadoop1 conf]# /usr/hdp/current/hbase-master/bin/hbase-daemon.sh start thrift -p 9090 --infoport 9091
starting thrift, logging to /var/log/hbase/hbase-root-thrift-hadoop1.out
>>> import happybase
>>> c = happybase.Connection('hadoop1',9090, autoconnect=False)
>>> c.open()
>>> print(c.tables())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/happybase/connection.py", line 242, in tables
    names = self.client.getTableNames()
  File "/usr/local/lib/python3.6/site-packages/thriftpy/thrift.py", line 198, in _req
    return self._recv(_api)
  File "/usr/local/lib/python3.6/site-packages/thriftpy/thrift.py", line 210, in _recv
    fname, mtype, rseqid = self._iprot.read_message_begin()
  File "thriftpy/protocol/cybin/cybin.pyx", line 439, in cybin.TCyBinaryProtocol.read_message_begin (thriftpy/protocol/cybin/cybin.c:6470)
cybin.ProtocolError: No protocol version header
>>>
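For what it's worth, a "No protocol version header" error from thriftpy usually means the client and the Thrift server disagree on transport or protocol: happybase defaults to the buffered transport and binary protocol, so a server running framed transport or the compact protocol needs matching connection options. A sketch, with option values to be matched against hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact in hbase-site.xml:

>>> c = happybase.Connection('hadoop1', 9090, autoconnect=False,
...                          transport='framed', protocol='compact')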
02-25-2019
04:52 PM
Hi Josh, how can I do it via the 'get' command? We will be using Java to get these rows.
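A minimal Java sketch with the HBase 1.x client API, using the table, column, and timestamps from the scan in the question below (note that setTimeRange treats the upper bound as exclusive, hence the +1):

import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetVersionRange {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("PERSON"))) {
            Get get = new Get(Bytes.toBytes("r1"));
            get.addColumn(Bytes.toBytes("student"), Bytes.toBytes("student_name"));
            get.setTimeRange(1551108018724L, 1551108073557L + 1); // [min, max)
            get.setMaxVersions(5); // return up to 5 versions inside the range
            Result result = table.get(get);
            for (Cell cell : result.rawCells()) {
                System.out.println(cell.getTimestamp() + " -> "
                        + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    }
}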
02-25-2019
04:05 PM
I can select a particular version from a table, but how can I select a range of versions? E.g., say I want to select all the rows from timestamp 1551108073557 to 1551108018724.
hbase(main):005:0> scan 'PERSON',{NAME => 'student',VERSIONS => 5}
ROW COLUMN+CELL
 r1 column=student:student_name, timestamp=1551108086986, value=John
 r1 column=student:student_name, timestamp=1551108073557, value=Mary
 r1 column=student:student_name, timestamp=1551108037609, value=Nancy
 r1 column=student:student_name, timestamp=1551108018724, value=Ram
 r1 column=student:student_name, timestamp=1551108002231, value=Sam
1 row(s) in 0.0190 seconds
hbase(main):008:0* get 'PERSON','r1',{COLUMN => 'student', TIMESTAMP => 1551108018724}
COLUMN CELL
 student:student_name timestamp=1551108018724, value=Ram
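The same range selection should also work directly in the shell with TIMERANGE (upper bound exclusive, as in the Java API), e.g.:

get 'PERSON', 'r1', {COLUMN => 'student:student_name', TIMERANGE => [1551108018724, 1551108073558], VERSIONS => 5}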
02-20-2019
07:44 PM
How do I know if stats were collected for a partitioned table? DESCRIBE FORMATTED doesn't show it.
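One thing that usually works (the table and partition names below are illustrative): point DESCRIBE FORMATTED at a specific partition rather than at the table, and the partition-level stats (numRows, rawDataSize, totalSize, COLUMN_STATS_ACCURATE) appear under Partition Parameters:

DESCRIBE FORMATTED my_table PARTITION (ds='2019-02-20');

They can be (re)computed per partition with:

ANALYZE TABLE my_table PARTITION (ds='2019-02-20') COMPUTE STATISTICS;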
02-18-2019
10:17 PM
Please see the attached screenshot (TEZ.JPG) of a job running in our 7-data-node cluster. Looking at the job, do you see anything we can tune, or anything that doesn't seem right? The job takes a long time to run even though we have a powerful cluster.
12-07-2018
03:58 AM
Without putting a trigger or an extra column on the source table.
11-30-2018
04:52 AM
How can we bring updates into Hive without loading the complete table and checking for the differences? Incremental append will only look at a date column for newly added records and bring those records only. We can always create a trigger on the source table to create an additional record on any update, but what's the solution without having to create a trigger? E.g., let's say we have a table TEST (ID int, NAME string, LOGIN_TIME timestamp).
We do incremental appends using the LOGIN_TIME column, so all new records get moved to Hive.
But what if someone modifies the NAME column? How will Sqoop pick it up?
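A hedged sketch of the usual approach, which only works if LOGIN_TIME is also updated whenever a row is modified (if it is set only at insert time, Sqoop has no change marker to detect the update): Sqoop's lastmodified incremental mode re-imports rows whose check column advanced past the saved last value and merges them into the existing data by key. The connection string and paths are placeholders:

sqoop import \
  --connect jdbc:mysql://dbhost/mydb --username myuser -P \
  --table TEST \
  --incremental lastmodified \
  --check-column LOGIN_TIME \
  --last-value "2018-11-29 00:00:00" \
  --merge-key ID \
  --target-dir /user/hive/warehouse/test

Saved as a sqoop job (sqoop job --create ...), the last value is tracked automatically between runs.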
11-27-2018
04:08 PM
After starting the LDAP server for Hadoop I am getting a different error now. Also, I had to replace the cluster name "ftehdp" with "default"?
curl -i -k -u admin:admin-password -X GET 'https://localhost:8443/gateway/default/webhdfs/v1/tmp/uname.txt?op=OPEN'
HTTP/1.1 307 Temporary Redirect
Date: Tue, 27 Nov 2018 16:21:44 GMT
Set-Cookie: JSESSIONID=1219u2f8zreb11eu9fuxlggxhq;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Mon, 26-Nov-2018 16:21:44 GMT
Cache-Control: no-cache
Expires: Tue, 27 Nov 2018 16:21:44 GMT
Date: Tue, 27 Nov 2018 16:21:44 GMT
Pragma: no-cache
Expires: Tue, 27 Nov 2018 16:21:44 GMT
Date: Tue, 27 Nov 2018 16:21:44 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Location: https://hadoop1:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/uname.txt?_=AAAACAAAABAAAACgLvtILkFAljr5PIP7MVSOAump8j0kSwFCPdGCP2R_b1tCZ0V2KGOQuiRiI4_IU7GDG6NqRtK2Vu7DOZeOhbuQUaP1FYtD_-IV3P-VXMbOFbPfbwpNseAuN-RyQduRm5S1mrk0GVbYKQg4NscgsoF0GGsvqKDyPtECwhwkX96E37Jc5_yCnlkw3LVKUY41Hg6LOt96W8-3rTmnrbo7o26dOcpPv1_uv4Q1F18b4yk5N5BNf6HTZdVZ6Q
Content-Type: application/octet-stream
Server: Jetty(6.1.26.hwx)
Content-Length: 0
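For what it's worth, the 307 itself is not necessarily an error: WebHDFS OPEN is a two-step exchange, and Knox rewrites the redirect so the client fetches the data from the Location URL. Following the redirect may be all that is needed, e.g. with curl's -L flag (note that curl may not resend the credentials when the redirect points at a different hostname than the original request):

curl -i -k -L -u admin:admin-password -X GET 'https://localhost:8443/gateway/default/webhdfs/v1/tmp/uname.txt?op=OPEN'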