Member since: 10-05-2015
Posts: 105
Kudos Received: 83
Solutions: 25
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 583 | 07-13-2017 09:13 AM
 | 710 | 07-11-2017 10:23 AM
 | 311 | 07-10-2017 10:43 AM
 | 1711 | 03-23-2017 10:32 AM
 | 1837 | 03-23-2017 10:04 AM
02-25-2020
10:17 PM
From the stack trace, the hbase:meta region itself is not available, which means no operations can proceed. We need to check the HBase master/RegionServer logs to find out why the hbase:meta region is not getting assigned. Would you mind sharing the logs?
07-13-2017
03:56 PM
You need to add the hbase-site.xml updated with the transaction-related configs to the application classpath.
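As a rough sketch, assuming Phoenix transactions backed by Tephra (Phoenix 4.7+), the hbase-site.xml entries would look something like this; the exact property set depends on your Phoenix version, and the snapshot dir path below is a placeholder:

```xml
<!-- Hypothetical hbase-site.xml fragment for Phoenix transactions. -->
<property>
  <name>phoenix.transactions.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Tephra transaction snapshot directory; path is a placeholder. -->
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>
```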
07-13-2017
09:13 AM
You can enable it by setting "hbase.replication.bulkload.enabled" to true in hbase-site.xml. For more information, check the release notes of https://issues.apache.org/jira/browse/HBASE-13153.
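As a sketch, the hbase-site.xml fragment would look like this; note that per HBASE-13153, bulk-load replication also needs a unique cluster id, and "source-cluster" below is a placeholder value:

```xml
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Unique id for this cluster; "source-cluster" is a placeholder. -->
  <name>hbase.replication.cluster.id</name>
  <value>source-cluster</value>
</property>
```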
07-11-2017
10:23 AM
2 Kudos
Are you creating the table from Phoenix sqlline or from Hive? When you create the table from the Phoenix client or sqlline, you don't need to provide the PhoenixStorageHandler and table properties information.
07-10-2017
10:43 AM
1 Kudo
When you know the row key you can use Get. HBase should automatically convert a Scan with the same start and end key into a Get. With a Get, the block reads in an HFile are positional reads (better for random reads) rather than seek + read (better for scanning).
06-29-2017
06:02 PM
1 Kudo
You need to create a java.sql.Array by calling conn.createArrayOf with a Long array. For example:

Long[] longArr = new Long[2];
longArr[0] = 25L;
longArr[1] = 36L;
java.sql.Array array = conn.createArrayOf("BIGINT", longArr);
ps.setArray(3, array);
04-17-2017
12:50 PM
Exactly, they won't be reflected.
04-17-2017
12:50 PM
0: jdbc:phoenix:sandbox:2181/h> EXPLAIN SELECT * FROM TRADE.TRADE;
+----------------------------------------------------------------------------+
| PLAN |
+----------------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TRADE:TRADE [1] |
| SERVER FILTER BY FIRST KEY ONLY |

Here the query is still going through the deleted local index, which means the data table's PTable object was not updated after dropping the local index, which is odd. When we drop the index we delete the data from the local indexes, but somehow we are still getting the dropped local index from the metadata. The reason might be that the dropped metadata entries have a later timestamp than 1490706922000, so when we set currentSCN=1490706922000 the old metadata is still visible.
03-30-2017
11:51 AM
Can you check whether the jar was copied to the local path at the RegionServer? {hbase.local.dir}/jars
03-30-2017
09:29 AM
When you use the UDF in a where condition it will be pushed down to the region server, so you need to add the jar to HDFS so that the UDF will be automatically loaded at the server. You can do the proper configuration as described at http://phoenix.apache.org/udf.html#Configuration Then upload the jar from sqlline using the following command and try again: add jar <local jar path>
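As a minimal sketch of the configuration that UDF page calls for, the hbase-site.xml entries would look roughly like this; the jars directory value below is a placeholder and should point at an HDFS path your cluster actually uses:

```xml
<!-- Client side: allow user-defined functions in Phoenix. -->
<property>
  <name>phoenix.functions.allowUserDefinedFunctions</name>
  <value>true</value>
</property>
<!-- HDFS directory that "add jar" uploads into and the servers
     load from; the value here is a placeholder path. -->
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>${hbase.rootdir}/lib</value>
</property>
```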
03-23-2017
11:06 AM
At present you can remove the stats of the view from SYSTEM.STATS with a delete query, something like the one below, and see whether you still get the same issue. If you still get the same issue, then restart the region server holding the SYSTEM.CATALOG table and close and reopen the connection. delete from SYSTEM.STATS where PHYSICAL_NAME='viewname'
03-23-2017
10:54 AM
1 Kudo
Ya, got it. Combining that directly from the raw files is not possible. Instead you need to create two tables, one for the CDR data and one for the CRM data, and then write an MR job or a Java client (depending on data size) with the following steps:
1) Scan the CDR table and get the Bnumber.
2) Call Get on the CRM table to get the corresponding details.
3) Prepare Puts from the Get results on CRM, add them to the row from CDR, and write back to the CDR table.
Or else you can use Apache Phoenix so that you can utilise the UPSERT SELECT feature, which simplifies things. http://phoenix.apache.org/ http://phoenix.apache.org/language/index.html#upsert_select
03-23-2017
10:32 AM
2 Kudos
You can create a table with a column family of interest and run ImportTsv on both files separately, specifying the Anumber field as HBASE_ROW_KEY; then the columns with the same Anumber will be combined into a single row. Is this what you are looking for? Ex:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:age,cf:day_c,cf:month_c,cf:year_c,cf:zip_code,cf:offerType,cf:offer,cf:gender -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:ContractCode,cf:Aoperator,cf:Bnumber,cf:Boperator,cf:Direction,cf:Type,cf:Month,cf:Category,cf:numberOfCalls,cf:duration,cf:longitude,cf:latitude -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
03-23-2017
10:04 AM
2 Kudos
You can increase phoenix.query.threadPoolSize to 256 or 512 based on the number of machines/cores, and also increase phoenix.query.queueSize to 5000, and add hbase-site.xml to the classpath or export HBASE_CONF_DIR. You can refer to http://phoenix.apache.org/tuning.html for more tuning.
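Put into the client-side hbase-site.xml, that would look like the fragment below; the values are the ones suggested above, so tune them to your cluster:

```xml
<property>
  <name>phoenix.query.threadPoolSize</name>
  <value>256</value>
</property>
<property>
  <name>phoenix.query.queueSize</name>
  <value>5000</value>
</property>
```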
03-23-2017
08:50 AM
@Houssem Alayet is Anumber common and going to be row key for the table?
... View more
03-23-2017
07:09 AM
@Ashok Kumar BM What's your HFile block size, is it the default 64 KB? If you are writing all 1 million cells of a row at a time, then better to increase the block size. Are you using any data block encoding techniques, which can improve the performance? And have you tried ROW or ROWCOL bloom filters?
03-22-2017
10:51 AM
2 Kudos
The table details and corresponding stats are stale at the RegionServer, so you might be getting StaleRegionBoundaryCacheException, or else you might be hitting PHOENIX-2447. Once we drop the view the stats are invalidated, and then while querying after recreation of the view the new stats from the system stats table are repopulated, so you do not see the same issue. Many stats-related issues are fixed in Phoenix 4.7.0, so better to upgrade to HDP 2.4 or above if you are on an older version, or to Phoenix 4.7.0 in the case of your own clusters.
12-30-2016
08:27 AM
After rolling back the configurations, have you restarted the server?
12-30-2016
07:01 AM
You have mentioned HDP version 2.4.2 and Phoenix version 4.7, but HDP 2.4.2 supports Phoenix version 4.4. Can you check the versions properly? You can upgrade to HDP 2.5, where the namespace feature is supported; then things should work properly.
11-29-2016
09:51 AM
@ARUN Or else can you try something similar with type conversion:
A = load 'hbase://query/select col1,col2.. from TRANSACTION' using org.apache.phoenix.pig.PhoenixHBaseLoader('localhost') as (rowKey:chararray, col_a:int, col_b:double, col_c:chararray);
11-29-2016
09:40 AM
1 Kudo
Yes, Export generates sequence files. If you want to import the exported data back into another HBase table in a different cluster then you can use it; otherwise it won't help. Pig should map the data types properly. Can you try specifying the column list rather than * and check? For eg:
A = load 'hbase://query/select col1,col2.. from TRANSACTION' using org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
11-29-2016
09:06 AM
Better to go with @Ankit Singhal's suggestion of using Pig/Hive or Spark to export.
11-29-2016
08:58 AM
1 Kudo
The Phoenix pherf utility exports details like how much time a query took, etc., not the data. There is no way to export data into CSV from Phoenix. Can you use the Export utility provided by HBase instead?
11-17-2016
08:43 AM
The index rebuild does a full table scan of the data table, so there is a chance of a timeout. Can you try increasing the timeout values of the properties below and retry rebuilding the index? Once you add/change the configurations, you need to export HBASE_CONF_DIR or HBASE_CONF_PATH pointing at the directory containing hbase-site.xml.
hbase.client.scanner.timeout.period=1200000
hbase.rpc.timeout=1200000
hbase.regionserver.lease.period=1200000
phoenix.query.timeoutMs=600000
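The four properties above would go into hbase-site.xml roughly as follows (same values as listed; adjust to your workload):

```xml
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>1200000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```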
11-16-2016
07:01 AM
@pan bocun From the snapshot you have provided, writing to the index failed at timestamp 1478692767291; that's why the indexes got disabled. They will be automatically rebuilt by Phoenix in the background. If you see the index rebuild taking too long, then you can drop the index and recreate it.
11-07-2016
03:15 PM
5 Kudos
You can add the following property, set to true, to balance by table, and then restart the masters: hbase.master.loadbalance.bytable
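In hbase-site.xml on the masters, that would be:

```xml
<property>
  <name>hbase.master.loadbalance.bytable</name>
  <value>true</value>
</property>
```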
11-07-2016
11:52 AM
After creating the table you can create a view like below:
CREATE VIEW "TESTVIEW" AS SELECT * FROM "TEST"
Or else, if you want to map an HBase table as a view (read only), you can just try it without the select query:
CREATE VIEW "TEST" ( pk VARCHAR PRIMARY KEY, "b"."message" VARCHAR )
If this solves the problem, mark it as the best answer.
11-07-2016
11:18 AM
1 Kudo
First you can map the table created in HBase to Phoenix with a create table query, as described in "Mapping to an Existing HBase Table" at phoenix.apache.org, and then create a view on it.
11-01-2016
10:28 AM
1 Kudo
@Hanife Shaik You can pick HBase or Cassandra based on the type of queries your application is going to use for dashboards/reports. Write performance is good in both HBase and Cassandra. HBase performs very well for range or point queries. You can try the HBase and Phoenix combo to simplify your application development. Also you can try the Phoenix ODBC driver https://hortonworks.com/wp-content/uploads/2016/08/phoenix-ODBC-guide.pdf or the .NET driver https://www.nuget.org/packages/Microsoft.Phoenix.Client/