Member since: 05-30-2016
Posts: 27
Kudos Received: 5
Solutions: 0
07-08-2019
08:21 PM
My Linux box's timezone is GMT, and when I query the tables in Phoenix (sqlline) the timestamps show the correct timezone. When I connect remotely and query the same tables, the timestamp data is converted to PST. By "remotely connect" I mean using Aqua Data Studio, and the same issue occurs when I connect via my application. I tried adding the property in hbase-site.xml, but no luck. Can you please let me know what I need to do to get the exact data stored in the Phoenix/HBase table without timezone conversion?
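A likely explanation (an assumption on my part, since Phoenix TIMESTAMP carries no timezone): the value is stored as a plain instant, and each JDBC client formats it using its own JVM default timezone, so a client JVM defaulting to PST renders a shifted wall-clock value. This minimal sketch shows the same stored instant rendered two ways; one common workaround to try is forcing the client JVM's timezone (e.g. `-Duser.timezone=GMT` in Aqua Data Studio's JVM options -- also an assumption, check your client's docs):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TzDemo {
    // Render a stored instant in a given zone, the way a JDBC client
    // formats a TIMESTAMP using its JVM default timezone.
    static String render(Instant stored, String zone) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.of(zone))
                .format(stored);
    }

    public static void main(String[] args) {
        // The value written on the GMT box: 2019-07-08 20:21:00 GMT
        Instant stored = Instant.parse("2019-07-08T20:21:00Z");
        System.out.println(render(stored, "GMT"));                 // what sqlline on the GMT box shows
        System.out.println(render(stored, "America/Los_Angeles")); // what a PST-defaulting client shows
    }
}
```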
10-18-2017
10:43 AM
I have updated the SYSTEM.CATALOG table in Phoenix to modify the column size of one of the tables I created earlier. I have restarted the cluster, but the Phoenix client is not picking up the new metadata. My requirement is to widen a VARCHAR column from 500 to 2000.
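Since Phoenix clients cache metadata, editing SYSTEM.CATALOG directly can leave clients out of sync. A safer sketch (table and column names here are hypothetical, not from the post) is to recreate with the wider column and copy the data over:

```sql
-- Sketch with hypothetical names: recreate with the wider column
-- and copy the data, rather than editing SYSTEM.CATALOG in place.
CREATE TABLE MYTABLE_NEW (
  id BIGINT NOT NULL PRIMARY KEY,
  descr VARCHAR(2000)   -- widened from VARCHAR(500)
);
UPSERT INTO MYTABLE_NEW (id, descr) SELECT id, descr FROM MYTABLE;
-- then drop the old table and point clients at the new one
```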
01-07-2017
10:44 AM
I am using our own Unix instances on AWS, which are not exactly EC2 type. I have installed Ambari.
Can you please let me know the steps to enable it?
01-05-2017
01:07 PM
1 Kudo
Is there any tool like "Chaos Monkey" available to use in an Ambari cluster setup?
I am trying to test HA. What is the best way to test it in my cluster?
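For the HBase side specifically, a hedged sketch: HBase's integration-test (hbase-it) module ships its own ChaosMonkey that kills and restarts daemons while an ingest workload runs. The class name and flag below are from the HBase 1.x line and may differ in your version, so check what's on your `hbase` classpath first:

```shell
# Runs an ingest workload with the "slowDeterministic" chaos policy
# (kills/restarts region servers on a schedule). Version-dependent.
hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic
```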
12-23-2016
07:16 AM
@Jay SenSharma As I mentioned above, that's the error I get in the Aqua Data Studio console. Apart from that, nothing is getting logged.
12-23-2016
07:00 AM
@Jay SenSharma I just checked Ambari Metrics / Configs / Metric Collector / Advanced ams-hbase-site / phoenix.spool.directory; it's ${hbase.tmp.dir}/phoenix-spool. In Aqua Data Studio, I have the jar files: hbase-client.jar; phoenix-client.jar; hbase-client-1.1.2.2.4.2.12-1.jar; phoenix-4.9.0-HBase-1.2-client.jar.
12-23-2016
06:52 AM
2 Kudos
I have set up a cluster in Ambari. I am using Aqua Data Studio to connect to HBase through the ZK nodes using the "org.apache.phoenix.jdbc.PhoenixDriver" driver. I can run normal queries on HBase tables, but it throws an error while running any aggregate function like max()/count():

>[Error] Script lines: 6-10 -------------------------
org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
[Executed: 23/12/2016 11:47:19 AM] [Execution: 28ms]

What do I need to do to fix this issue?
12-20-2016
12:44 PM
At present I am using Phoenix to add dynamic columns, and I have no issue adding them. But if I have to scan the entire table to find out the number of qualifiers for each row, I need to do it for each individual query.
12-20-2016
11:57 AM
I have a requirement to design a table with defined column families but qualifiers that are not consistent at the beginning; they might vary for each record. I tried storing the varying qualifiers in a JSON object and traversing through it, but performance went down when fetching specific data from a particular qualifier. Moreover, I need to break the query in two: first fetch how many qualifiers there are, and based on that issue the actual select query. Please let me know how I can get this done in HBase. The types of queries I want to run are:
1. select count(*) from table where one of the qualifiers = 'REW' (how do I know how many qualifiers there are for each row?)
2. select the first and last qualifier data for keys in (1231, 321)
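If Phoenix is an option here, its dynamic-column syntax lets each statement declare the qualifiers it needs inline (table and column names below are hypothetical). As far as I know, though, Phoenix has no built-in way to enumerate which dynamic qualifiers a given row actually has, so the "how many qualifiers" question usually needs either a fixed maximum or a count column maintained at write time:

```sql
-- Dynamic columns are declared inline, per statement (hypothetical names):
SELECT COUNT(*)
FROM journeys (cf1.q1 VARCHAR)
WHERE cf1.q1 = 'REW';

SELECT key, cf1.q1, cf1.q4
FROM journeys (cf1.q1 VARCHAR, cf1.q4 VARCHAR)
WHERE key IN (1231, 321);
```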
12-18-2016
03:17 PM
I have created a table with a few CFs and an initial list of qualifiers. Later on, while UPSERTing data into the table with dynamic qualifiers, the data was added successfully, but I am not able to view the dynamic columns anymore.

CREATE TABLE IF NOT EXISTS EMPDETAIL (
  id bigint not null,
  name varchar(255) not null,
  age bigint,
  address.addr1 varchar(1000),
  address.addr2 varchar(1000),
  address.addr3 varchar(1000),
  CONSTRAINT EMPDETAIL_PK PRIMARY KEY (id, name)
);

UPSERT INTO EMPDETAIL (id, name, age, address.addr1, address.addr2, address.addr3)
  VALUES (1, 'Test1', 20, 'testaddr1', 'testaddr2', 'testaddr3');
UPSERT INTO EMPDETAIL (id, name, age, address.addr1, address.addr2, address.addr3, address.addr4 varchar(1000))
  VALUES (2, 'Test2', 220, 'testaddr11', 'testaddr22', 'testaddr33', 'testaddr44');
UPSERT INTO EMPDETAIL (id, name, age, address.addr1, address.addr2, address.addr3, address.addr4 varchar(1000))
  VALUES (3, 'Test3', 330, 'testaddr33', 'testaddr33', 'testaddr333', 'testaddr444');

If I execute "SELECT * FROM EMPDETAIL", it doesn't return address.addr4. I am using Aqua Data Studio to connect to HBase using the Phoenix and HBase client jars.
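This is expected Phoenix behavior, as I understand it: SELECT * only returns columns in the table metadata, and dynamic columns like address.addr4 are not part of it, so they must be re-declared in the query that reads them. A sketch using the table above:

```sql
-- Dynamic columns have to be declared again at read time:
SELECT id, name, address.addr4
FROM EMPDETAIL (address.addr4 VARCHAR(1000))
WHERE id IN (2, 3);
```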
12-18-2016
11:46 AM
Hi @Randy Gelhausen, @Ryan Cicak, thanks for your reply.
But my requirement is slightly different; let me give you an example. I have to store the complete journey details when I plan to travel from Mumbai to NY. For each person the journey is different: I may travel MUM-DUB-NY, while another person travels MUM-DEL-LONDON-NY, etc. So for me the column family ('stoppages') will contain 3 qualifiers, but it's 4 for the other person.

Itinerary1: CF1.stop1=MUM, CF1.stop2=DUB, CF1.stop3=NY
Itinerary2: CF1.stop1=MUM, CF1.stop2=DEL, CF1.stop3=LONDON, CF1.stop4=NY

Now the question is, how would I know how many qualifiers are available for each record, unless I query like select * from <table> where key=<key>? And what if I want to fetch the details of the itineraries whose last stoppage is NY?
12-17-2016
06:36 PM
Is it possible to add an index on views? Also, I want to create a table with defined column families and a few qualifiers. My requirement is to store data with qualifiers added at runtime. How can I make sure I fetch the valid qualifiers while fetching a particular record?
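On the first question: Phoenix does support secondary indexes on views, subject to version-specific limitations. A minimal sketch (view, index, and base-table names are hypothetical):

```sql
-- Hypothetical names: a filtered view over a base table, then an
-- index created directly on the view.
CREATE VIEW emp_view AS SELECT * FROM EMPDETAIL WHERE age > 100;
CREATE INDEX emp_view_idx ON emp_view (age);
```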
12-17-2016
05:23 PM
1 Kudo
I am designing a table in HBase using Phoenix. Can I define a column family at the beginning and then add column qualifiers at run time?
Also, when I query the table, how would I know what column qualifiers I have in it? Do I need to keep a manual metadata table to track the column qualifiers? That's difficult, because the qualifiers might differ for every record.
12-15-2016
06:12 AM
How should I proceed with load testing on an HBase cluster? How can I spawn concurrent threads/requests for both loading and extracting data?
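One option worth trying is HBase's bundled PerformanceEvaluation tool, which drives concurrent client threads for both writes and reads (the row counts and thread counts below are just illustrative starting points):

```shell
# 10 concurrent client threads, 100k rows each, no MapReduce:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100000 randomWrite 10
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100000 randomRead 10
```

YCSB is a commonly used alternative if you want mixed read/write workloads with configurable distributions.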
12-14-2016
07:50 PM
I have an Ambari installation with 1 active namenode, 1 secondary namenode, and 5 datanodes, to be used primarily for HBase. Since this is a brand new cluster setup, we are not sure whether to go with the default settings or not.
We do have one use case, but for the data system setup, what factors should we look at to finalize the various parameters?
For example: load testing (how should I proceed with it?), compaction parameters (shortCompactions as well), and the initial config for memstore flushes (the periodic flusher as well). Need help on this.
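For reference, these are the hbase-site.xml properties behind the knobs mentioned above. The values shown are common defaults in the HBase 1.x line, included only as illustrative starting points, not as recommendations for this cluster:

```xml
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- per-region memstore flush threshold, 128 MB -->
</property>
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value> <!-- min store files before a minor compaction kicks in -->
</property>
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value> <!-- periodic major compaction interval: 7 days, in ms -->
</property>
```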
12-14-2016
11:05 AM
These are some noticeable picks from the RS log:

[MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for SYSTEM.STATS,,1471982464802.1ff0385b05d60535d5217e2402753d6d., current region memstore size 1.88 KB, and 1/1 column families' memstores are being flushed.
regionserver.HRegionServer: myregionserver.com,16020,1477683211158-MemstoreFlusherChore requesting flush for region SYSTEM.STATS,,1471982464802.1ff0385b05d60535d5217e2402753d6d. after a delay of 17160
[regionserver/myregionserver.com/<IPADDR>:16020-shortCompactions-1477693224761] regionserver.HRegion: Starting compaction on 0 in region SYSTEM.STATS,,1471982464802.1ff0385b05d60535d5217e2402753d6d.
12-14-2016
06:29 AM
Do you mean the file locality can be lost at any point in time and there's no need to worry about it? Or do I need to configure and schedule major compactions or memstore flushes? Is it going to impact live queries if I leave it as it is? At present the configs are the defaults that came with the Ambari installation; I haven't modified anything. Maybe I'm asking too many questions, but since this is my first cluster and it's about to go live in prod, I'm a little worried.
12-14-2016
05:51 AM
Got it. I want to first figure out what caused the file locality to go down, then take action.
12-14-2016
05:29 AM
Can you please tell me what exactly I need to look for in the region servers' logs so that I can find the specific reason: a memstore flush, a region merge, a split, failures, or a major compaction?
Then I can configure accordingly, so it doesn't start by itself.
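As a starting point, something like this grep sketch could surface those events. "Started memstore flush" and "Starting compaction" appear in the log excerpts earlier in this thread; the other patterns and the log path are assumptions and may vary by HBase version and install layout:

```shell
# Scan the region server log for events that move data around:
# flushes, compactions, splits/merges, and region opens after moves.
grep -E "Started memstore flush|Starting compaction|split|merge|Opened region" \
  /var/log/hbase/hbase-*-regionserver-*.log | tail -50
```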
12-14-2016
04:50 AM
I have created the cluster; it's a brand new cluster and no HBase operations have been done so far, though the cluster has gone through several restarts.
12-13-2016
05:30 PM
In my Ambari cluster setup, I see the HBase Files Local percentage fluctuating, though the cluster is not busy at all and I am not doing any bulk load or other heavy operation. What do I need to verify and change to keep it stable and above 70%?
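One thing worth knowing here: after restarts, regions can reopen on a different region server than the one holding their HFile blocks, which drops locality. A major compaction rewrites the files on the region's current host and tends to restore it. A hedged sketch (table name is hypothetical):

```shell
# Trigger a manual major compaction on one table from the hbase shell:
echo "major_compact 'MYTABLE'" | hbase shell
```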
12-13-2016
09:10 AM
The definition of the alert says:
"This service-level alert is triggered if the NameNode heap usage deviation has grown beyond the specified threshold within a day period."
Initially I had configured the "Growth Rate" to 50% (CRIT) and 20% (WARN). Now I have increased the CRIT value to 60%.
Is that the right way to resolve it, or do I need to change something else?
12-13-2016
07:59 AM
Last night I loaded a heavy amount of data into my HBase cluster. When I started the cluster this morning, it started throwing the above alert. I tried to increase:
namenode_opt_newsize, namenode_opt_maxnewsize, hbase_master_heapsize, hbase_master_xmn_size, and metrics_collector_heapsize in AMS, but no luck. I even tried to 'expunge' the trash on the namenode, but the alert still persists. How can I get rid of this alert?
12-12-2016
01:30 PM
1 Kudo
How can I add a widget to visually monitor the JVMs for my cluster in Ambari? As an alternative I tried jVisualVM, but I was unable to install it on the admin node running CentOS 6.
12-12-2016
01:22 PM
If I create a view on top of a table in Phoenix, is it going to claim physical space?
12-01-2016
12:10 PM
I am using Ambari version 2.4.1 with 8 region servers (16 vCPUs). Is it OK to have 8*3 = 24 salt buckets for my table? How do I upgrade Phoenix to 4.6, where I can create a table and map the row timestamp to a Phoenix column? And if I create a view on top of a table in Phoenix, is it going to claim physical space?
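Once on Phoenix 4.6+, the row-timestamp mapping is declared in the primary key constraint, and salting is set at table creation. A sketch combining both (table and column names are hypothetical, and 24 buckets is just the figure from the question, not a recommendation):

```sql
-- Hypothetical names: a salted table whose leading PK column is
-- mapped to the HBase row timestamp (Phoenix 4.6+ ROW_TIMESTAMP).
CREATE TABLE EVENTS (
  created DATE NOT NULL,
  id BIGINT NOT NULL,
  payload VARCHAR,
  CONSTRAINT pk PRIMARY KEY (created ROW_TIMESTAMP, id)
) SALT_BUCKETS = 24;
```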