Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2539 | 08-07-2017 08:39 AM
 | 3965 | 07-26-2017 06:06 AM
 | 9385 | 12-30-2016 08:29 AM
 | 7419 | 11-28-2016 08:08 AM
 | 7012 | 11-21-2016 02:16 PM
08-04-2017
10:15 AM
1 Kudo
You can't execute a "get" on a non-string row key from the shell; you need to do it through the HBase Java API: Get get = new Get(Bytes.toBytes("row1")); In Phoenix's case it would be difficult to form the exact row key from the primary key values, as that involves low-level APIs (PTable.newKey(), PDataType.toBytes(value, column.getSortOrder()), etc.). However, if you just need point lookups, you can still do them from SQL: SELECT * FROM MyTab WHERE feature='Temp' AND TS=TO_TIME('<ts_value>')
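For completeness, a minimal self-contained sketch of such a lookup, assuming the HBase 1.x client API and a hypothetical table MyTab with a string row key row1:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RowLookup {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("MyTab"))) {
            // Row keys are raw bytes, so a non-string key can be passed here as well
            Get get = new Get(Bytes.toBytes("row1"));
            Result result = table.get(get);
            System.out.println("Found row: " + !result.isEmpty());
        }
    }
}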
08-02-2017
05:37 AM
It depends on what those 50K users are doing (if your cluster capacity and configuration are right, you can scale horizontally without any problem). If it is just point lookups (key-value access), then depending on the disks (SSD/HDD) you are using, you should be able to scale without any problem; some basic configuration tweaks are required, like increasing the number of handlers for the DataNode and RegionServer, and tuning the block cache/bucket cache. If you are doing heavy scans, then you may need a larger cluster that can bear the load; network, CPU, and disk will play an important role.
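As a rough sketch of those knobs: the property names below are the standard ones, but the values are purely illustrative and must be sized to your own hardware and workload.

<!-- hbase-site.xml: RegionServer handlers and caches (illustrative values) -->
<property><name>hbase.regionserver.handler.count</name><value>100</value></property>
<property><name>hfile.block.cache.size</name><value>0.4</value></property>
<property><name>hbase.bucketcache.ioengine</name><value>offheap</value></property>
<property><name>hbase.bucketcache.size</name><value>8192</value></property>
<!-- hdfs-site.xml: DataNode handlers -->
<property><name>dfs.datanode.handler.count</name><value>30</value></property>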
08-02-2017
05:24 AM
You can create a schema (which is similar to a database) using the following grammar: https://phoenix.apache.org/language/index.html#create_schema The schema will be mapped to a namespace in HBase, so your tables can be segregated logically as well as physically: https://phoenix.apache.org/namspace_mapping.html
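A minimal sketch over JDBC, assuming a hypothetical schema name and that phoenix.schema.isNamespaceMappingEnabled=true is set in hbase-site.xml on both client and server:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSchemaExample {
    public static void main(String[] args) throws Exception {
        // ZooKeeper quorum is hypothetical; adjust to your cluster
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE SCHEMA IF NOT EXISTS SENSOR_DATA");
            // This table lands in the SENSOR_DATA HBase namespace
            stmt.execute("CREATE TABLE SENSOR_DATA.READINGS (ID BIGINT NOT NULL PRIMARY KEY, VAL DOUBLE)");
        }
    }
}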
08-01-2017
09:01 AM
It seems you are using the column mapping feature (which is the default from 4.11). It stores the column name in an encoded form of the qualifier in HBase, to save space and for better performance. Currently we don't provide any API to give you such a mapping (there is no standard JDBC API that exposes storage details). If your application requires using the HBase qualifier directly, then I would suggest creating the table with column encoding disabled, so that the column name is used as the HBase qualifier as well:

CREATE TABLE test (
  tag VARCHAR(10) NOT NULL,
  ts DATE NOT NULL,
  val INTEGER
  CONSTRAINT pk PRIMARY KEY (tag, ts)
) COLUMN_ENCODED_BYTES = 0;
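As a hypothetical way to verify the effect from the HBase side (note that Phoenix upper-cases unquoted identifiers, so the table is stored as TEST, with the default column family 0):

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class QualifierCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("TEST"));
             ResultScanner scanner = table.getScanner(new Scan())) {
            for (Result r : scanner) {
                for (Cell cell : r.rawCells()) {
                    // With COLUMN_ENCODED_BYTES=0 this prints the column name (e.g. VAL)
                    // rather than an encoded qualifier
                    System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell)));
                }
            }
        }
    }
}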
08-01-2017
08:36 AM
For the second question ("Also, I want to add a table with defined column families and few qualifiers. My requirement is to store the data with qualifiers at runtime. How can I make sure to fetch the valid qualifiers while fetching particular record?"): you can use dynamic columns while upserting data and doing SELECTs, as in the sketch below. https://phoenix.apache.org/dynamic_columns.html
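A minimal sketch of both sides, using a hypothetical EVENT_LOG table; the dynamic column is declared inline in the UPSERT and must be declared again in the SELECT:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DynamicColumnsExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Only the primary key is fixed at CREATE time
            stmt.execute("CREATE TABLE IF NOT EXISTS EVENT_LOG (EVENT_ID BIGINT NOT NULL PRIMARY KEY)");
            // LAST_GC_TIME is a dynamic column, typed inline at write time
            stmt.executeUpdate("UPSERT INTO EVENT_LOG (EVENT_ID, LAST_GC_TIME TIME) VALUES (1, CURRENT_TIME())");
            conn.commit();
            // Re-declare the dynamic column to read it back
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT EVENT_ID, LAST_GC_TIME FROM EVENT_LOG(LAST_GC_TIME TIME)")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getTime(2));
                }
            }
        }
    }
}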
07-28-2017
05:44 AM
Why SquirrelSQL is not able to make a connection to the HBase cluster is not clear from the stack trace. Would you mind enabling debug logging and pasting the logs here? Also, have you tried connecting with sqlline using the same URL from the same machine?
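If sqlline isn't handy, a bare JDBC check from the same machine isolates the client environment; the quorum below is hypothetical, so substitute the URL you gave SquirrelSQL:

import java.sql.Connection;
import java.sql.DriverManager;

public class PhoenixConnTest {
    public static void main(String[] args) throws Exception {
        // Same URL format SquirrelSQL should be using
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}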
07-27-2017
10:11 AM
I would not recommend upgrading a single component within a stack, as there will be incompatibility and upgrade issues. I suspect you might be hitting PHOENIX-2169, so it's better to reproduce the same issue somewhere in pre-prod/dev, confirm that the fix works, and only then roll it out to production.
Or you can ask your vendor to provide a hotfix for the same version.
07-26-2017
10:43 AM
You need to move your snapshot from the exported directory to the directory where HBase looks for snapshots (.hbase-snapshot), so that list_snapshots and other commands (like clone_snapshot) work as usual:

hdfs dfs -mv hdfs://NAME_NODE:8020/hbase/.hbase-snapshot/<snapshot_name> hdfs://NAME_NODE:8020/apps/hbase/data/.hbase-snapshot/
hdfs dfs -mv hdfs://NAME_NODE:8020/hbase/archive/data/* hdfs://NAME_NODE:8020/apps/hbase/data/archive/data/

FYI, to list snapshots directly from a remote directory:

hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -remote-dir hdfs://NAME_NODE:8020/hbase/ -list-snapshots
07-26-2017
09:34 AM
1 Kudo
You can't authenticate with a username and password in Phoenix/HBase/HDFS; the only way is to go through Kerberos authentication.
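For reference, the Phoenix JDBC URL can carry the Kerberos principal and keytab directly; the quorum, principal, and keytab path below are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;

public class KerberosConnExample {
    public static void main(String[] args) throws Exception {
        // URL format: jdbc:phoenix:<quorum>:<port>:<root znode>:<principal>:<keytab>
        String url = "jdbc:phoenix:zk-host:2181:/hbase-secure:app-user@EXAMPLE.COM:/etc/security/keytabs/app-user.keytab";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Authenticated via Kerberos: " + !conn.isClosed());
        }
    }
}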
07-26-2017
06:06 AM
It may be due to https://issues.apache.org/jira/browse/PHOENIX-2169. Regarding "It's odd to notice that sometimes I am able to upsert 50k register in a query (a few days), and sometimes I am limited to 9k registers (around 2 days) or less": it can happen irregularly when an UPSERT SELECT is running scans and mutating in parallel, because our ProjectedColumnExpression is not thread-safe. So you may try backporting PHOENIX-2169 into your distribution, or upgrade to HDP 2.5 or Phoenix 4.7.