Member since: 02-19-2016
Posts: 158
Kudos Received: 69
Solutions: 24
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1284 | 11-29-2017 08:47 PM
 | 1695 | 10-24-2017 06:37 PM
 | 17169 | 08-04-2017 06:58 PM
 | 1836 | 05-15-2017 06:42 PM
 | 2237 | 03-27-2017 06:36 PM
08-09-2016
07:21 PM
@Michel Sumbul you may find all functions here: https://phoenix.apache.org/language/functions.html. And of course you may use the 'like' keyword, so it will be a regular SQL query: select name, phone from table1 where name like '%Smith%'
... View more
07-29-2016
07:14 PM
This implementation uses the Kerberos ticket cache, so you need to run kinit before starting the client. No changes to the URL are required. Also make sure that the hbase-site.xml on the client's classpath has hbase.security.authentication set to kerberos.
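To make that concrete, this is the client-side property in question; the fragment below is a minimal sketch, and the principal you kinit with is of course your own:

```xml
<!-- client-side hbase-site.xml (must be on the client classpath) -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
```

Then run `kinit user@EXAMPLE.COM` (a placeholder principal) before starting the client.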
... View more
07-29-2016
05:39 AM
@Sunile Manjee PQS works like most other Hadoop services: it requires its own keytab and principal to access HDFS/HBase. (Please note that the documentation says these are configurations relating to the server connecting to a secure cluster.) Once you have configured the PQS Kerberos keytab/principal, any client may work with PQS without restrictions. No client authentication is supported at the moment; it will be added in the HDP 2.5 release. More details can be found at https://issues.apache.org/jira/browse/PHOENIX-2792
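As a sketch, the server-side settings go into the hbase-site.xml that PQS reads; the keytab path and principal below are placeholders, not values from this thread:

```xml
<!-- PQS hbase-site.xml: keytab/principal used to connect to a secure cluster -->
<property>
  <name>phoenix.queryserver.keytab.file</name>
  <value>/etc/security/keytabs/phoenix.service.keytab</value>
</property>
<property>
  <name>phoenix.queryserver.kerberos.principal</name>
  <value>phoenix/_HOST@EXAMPLE.COM</value>
</property>
```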
... View more
07-26-2016
07:18 AM
1 Kudo
Currently there is no backup functionality specific to Phoenix. Meanwhile, you may use HBase snapshots; you need to snapshot the SYSTEM tables as well as the user tables. The next release of HDP will include HBase backup functionality, and hopefully quite soon we will see Phoenix backup based on it.
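A minimal sketch of the snapshot approach from the HBase shell (the user table and snapshot names are illustrative):

```
hbase> snapshot 'SYSTEM.CATALOG', 'syscat_snapshot'
hbase> snapshot 'MY_USER_TABLE', 'my_user_table_snapshot'
```

Remember to cover all the SYSTEM tables, not just SYSTEM.CATALOG.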
... View more
07-25-2016
11:32 PM
1 Kudo
Those parameters are for the Query Server configuration. They need to be added to hbase-site.xml so that the Query Server will be able to connect to HBase. As far as I know, there are no Kerberos configuration parameters in the JDBC connection string for PQS.
... View more
06-23-2016
08:36 PM
3 Kudos
The problem is the missing colon in the URLs you tried. It's supposed to be "zk1-titanu:2181:/hbase-unsecure" (note the colon before the znode path).
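Side by side, using the quorum host from your post:

```
jdbc:phoenix:zk1-titanu:2181/hbase-unsecure     <-- missing colon before the znode
jdbc:phoenix:zk1-titanu:2181:/hbase-unsecure    <-- correct
```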
... View more
06-22-2016
07:17 PM
Could you please explain your use case? Do you plan to query this data using HBase or Phoenix? In the Phoenix case you just use regular SQL statements via the JDBC driver. For HBase you need to handle everything yourself, so to look up a specific record you need to run something like: get 'JOURNEY_OFICINA_HBASE', '01982016-06-01 00:00:00 ' You need those trailing whitespaces because you are using fixed-size types, so the total length of the rowkey should be exactly 25 characters.
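Since the rowkey is built from fixed-size types, client code has to pad the key to the full 25 characters before issuing the get. A minimal sketch of that padding (the class and method names are my own, not from any HBase API):

```java
public class RowKeyPad {
    // Right-pad a rowkey with spaces to the fixed width demanded by fixed-size types.
    static String padKey(String key, int width) {
        StringBuilder sb = new StringBuilder(key);
        while (sb.length() < width) {
            sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "0198" + "2016-06-01 00:00:00" is 23 characters; pad it out to 25.
        String rowkey = padKey("01982016-06-01 00:00:00", 25);
        System.out.println("[" + rowkey + "] length=" + rowkey.length());
    }
}
```

The padded string is what you would pass as the rowkey argument to the shell get above.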
... View more
06-22-2016
07:05 PM
I would suggest setting HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf, as the official documentation says. More details can be found at https://phoenix.apache.org/bulk_dataload.html. The CSV bulk load uses the regular HBase routine to load HFiles, so the actual hbase-site.xml is required on the classpath.
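A sketch of the invocation under those assumptions; the jar name, table, and input path are placeholders, while CsvBulkLoadTool and its --table/--input options come from the linked documentation:

```
export HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE --input /data/example.csv
```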
... View more
06-22-2016
06:10 PM
2 Kudos
You don't need to modify the client jar. As @Ted Yu mentioned, one of the reasons may be a missing hbase-site.xml on the classpath. You may also provide the proper ZooKeeper parent znode in the connection URL: jdbc:phoenix:quorum:2181:/hbase-unsecure It would be nice if you could give a bit more information about how you connect the client.
... View more
06-14-2016
07:40 AM
1 Kudo
It seems that PQS is already running with pid 6896. You may try to use it by simply running the thin client.
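For example, pointing the thin client at the running Query Server; the install path is an assumption about a standard HDP layout, and 8765 is the default PQS port:

```
/usr/hdp/current/phoenix-client/bin/sqlline-thin.py http://localhost:8765
```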
... View more