Member since: 10-05-2015
Posts: 105
Kudos Received: 83
Solutions: 25

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 345 | 07-13-2017 09:13 AM
 | 334 | 07-11-2017 10:23 AM
 | 124 | 07-10-2017 10:43 AM
 | 879 | 03-23-2017 10:32 AM
 | 770 | 03-23-2017 10:04 AM
02-25-2020
10:17 PM
From the stack trace, the hbase:meta region itself is not available, which means no operation can proceed. We need to look at the HBase master/RegionServer logs to find out why the hbase:meta region is not getting assigned. Would you mind sharing the logs?
... View more
07-13-2017
03:56 PM
You need to add an hbase-site.xml updated with the transaction-related configs to the application classpath.
... View more
07-13-2017
09:13 AM
You can enable it by setting "hbase.replication.bulkload.enabled" to true in hbase-site.xml. For more information, check the release notes of https://issues.apache.org/jira/browse/HBASE-13153.
... View more
07-11-2017
10:23 AM
2 Kudos
Are you creating the table from Phoenix sqlline or from Hive? When you create the table from the Phoenix client or sqlline, you don't need to provide all the PhoenixStorageHandler information and table properties.
... View more
07-10-2017
10:43 AM
1 Kudo
When you know the row key, you can use a Get. HBase should automatically convert a Scan with the same start and end key into a Get. With a Get, the block reads in an HFile are positional reads (better for random reads) rather than seek + read (better for scanning).
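As a minimal sketch, assuming the HBase 1.x Java client and a hypothetical table "mytable" with column family "cf" and column "col", a lookup by known row key with Get looks like this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("mytable"))) {
            // The row key is known up front, so issue a Get instead of a single-row Scan.
            Get get = new Get(Bytes.toBytes("row-0001"));
            get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col")); // restrict to what we need
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"));
            System.out.println(value == null ? "not found" : Bytes.toString(value));
        }
    }
}
```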
... View more
06-29-2017
06:02 PM
1 Kudo
You need to create a java.sql.Array by calling conn.createArrayOf with a Long array. For example:
// conn is a java.sql.Connection, ps is a java.sql.PreparedStatement
Long[] longArr = new Long[2];
longArr[0] = 25L;
longArr[1] = 36L;
java.sql.Array array = conn.createArrayOf("BIGINT", longArr); // maps to a Phoenix BIGINT ARRAY
ps.setArray(3, array);
... View more
04-17-2017
12:50 PM
Exactly, they won't be reflected.
... View more
04-17-2017
12:50 PM
0: jdbc:phoenix:sandbox:2181/h> EXPLAIN SELECT * FROM TRADE.TRADE ;
+----------------------------------------------------------------------------+
|                                    PLAN                                    |
+----------------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TRADE:TRADE [1]  |
|     SERVER FILTER BY FIRST KEY ONLY                                        |

Here it is going through the deleted local index, which means the data table's PTable object was not updated after dropping the local index, which is odd. When we drop the index we delete the data from the local indexes, but somehow we are still getting the dropped local index from the metadata. The reason might be that the dropped meta entries have a much later timestamp than 1490706922000, so when we set currentSCN=1490706922000 the old metadata may still be visible.
... View more
03-30-2017
11:51 AM
Can you check whether the jar was copied to the local path on the RegionServer? {hbase.local.dir}/jars
... View more
03-30-2017
09:29 AM
When you use the UDF in a WHERE condition, it will be pushed down to the region servers, so you need to add the jar to HDFS so that the UDF is loaded automatically on the server side. You can do the required configuration as described here: http://phoenix.apache.org/udf.html#Configuration Then upload the jar from sqlline using the following command and try again: add jar <local jar path>
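Once the jar is available, the function still has to be registered before it can be used in queries. A minimal sketch, assuming a hypothetical function name, implementation class, and HDFS jar path (see the UDF documentation linked above for the exact syntax your version supports):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RegisterUdf {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper quorum; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Register the UDF against the jar that was uploaded to HDFS.
            stmt.execute("CREATE FUNCTION my_reverse(varchar) RETURNS varchar "
                    + "AS 'com.example.udf.MyReverse' "
                    + "USING JAR 'hdfs:/apps/hbase/lib/myudfs.jar'");
        }
    }
}
```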
... View more
03-23-2017
11:06 AM
At present you can remove the stats of the view from SYSTEM.STATS with a DELETE query, something like the one below, and see whether you still get the same issue. If you are still getting the same issue, restart the region server holding the SYSTEM.CATALOG table and close and reopen the connection. delete from SYSTEM.STATS where PHYSICAL_NAME = '<viewname>'
... View more
03-23-2017
10:54 AM
1 Kudo
Yeah, got it. Combining that directly from the raw files is not possible. Instead, you need to create two tables for the CDR data and the CRM data, and then write an MR job or a plain Java client (depending on the data size) with the following steps (see the sketch below):
1) Scan the CDR table and read the Bnumber.
2) Call Get on the CRM table to fetch the corresponding details.
3) Build Puts from the CRM result, add them to the row fetched from the CDR table, and write them back to the CDR table.
Or else you can use Apache Phoenix so that you can make use of the UPSERT SELECT feature, which simplifies things. http://phoenix.apache.org/ http://phoenix.apache.org/language/index.html#upsert_select
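A minimal client-side sketch of those steps, assuming the HBase 1.x Java API and hypothetical table names "CDR" and "CRM", a column family "cf", and a "Bnumber" qualifier (an MR job would follow the same pattern inside a mapper):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CdrCrmJoin {
    private static final byte[] CF = Bytes.toBytes("cf");
    private static final byte[] BNUMBER = Bytes.toBytes("Bnumber");

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table cdr = conn.getTable(TableName.valueOf("CDR"));
             Table crm = conn.getTable(TableName.valueOf("CRM"))) {
            // 1) Scan the CDR table.
            try (ResultScanner scanner = cdr.getScanner(new Scan().addFamily(CF))) {
                for (Result cdrRow : scanner) {
                    byte[] bNumber = cdrRow.getValue(CF, BNUMBER);
                    if (bNumber == null) {
                        continue; // nothing to look up for this CDR row
                    }
                    // 2) Get the matching CRM row (keyed by Bnumber).
                    Result crmRow = crm.get(new Get(bNumber));
                    if (crmRow.isEmpty()) {
                        continue;
                    }
                    // 3) Copy the CRM cells onto the CDR row and write it back.
                    Put put = new Put(cdrRow.getRow());
                    for (Cell cell : crmRow.rawCells()) {
                        put.addColumn(CellUtil.cloneFamily(cell),
                                CellUtil.cloneQualifier(cell),
                                CellUtil.cloneValue(cell));
                    }
                    cdr.put(put);
                }
            }
        }
    }
}
```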
... View more
03-23-2017
10:32 AM
2 Kudos
You can create a table with the column families of interest and run ImportTsv on both files separately, specifying the Anumber field as HBASE_ROW_KEY; then the columns with the same Anumber will be combined into a single row. Is this what you are looking for? Ex:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:age,cf:day_c,cf:month_c,cf:year_c,cf:zip_code,cf:offerType,cf:offer,cf:gender -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:ContractCode,cf:Aoperator,cf:Bnumber,cf:Boperator,cf:Direction,cf:Type,cf:Month,cf:Category,cf:numberOfCalls,cf:duration,cf:longitude,cf:latitude -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
... View more
03-23-2017
10:04 AM
2 Kudos
You can increase phoenix.query.threadPoolSize to 256 or 512 based on the number of machines/cores, and also increase phoenix.query.queueSize to 5000, then add hbase-site.xml to the classpath or export HBASE_CONF_DIR. You can refer to http://phoenix.apache.org/tuning.html for more tuning.
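If you would rather not manage a client-side hbase-site.xml, a sketch of passing the same keys as connection properties is shown below (this assumes Phoenix honors them as connection-level overrides and uses a hypothetical ZooKeeper quorum; the classpath hbase-site.xml route above is the documented one):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixClientTuning {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side Phoenix thread pool and queue sizing (same keys as in hbase-site.xml).
        props.setProperty("phoenix.query.threadPoolSize", "512");
        props.setProperty("phoenix.query.queueSize", "5000");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // run queries here; the larger pool applies to this client's parallel scans
        }
    }
}
```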
... View more
03-23-2017
08:50 AM
@Houssem Alayet Is Anumber common and going to be the row key for the table?
... View more
03-23-2017
07:09 AM
@Ashok Kumar BM What's your HFile block size, is it the default 64 KB? If you are writing all 1 million cells of a row at a time, then it's better to increase the block size. Are you using any data block encoding, which can improve performance, and have you tried ROW or ROWCOL bloom filters?
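For reference, a minimal sketch of adjusting these column-family settings through the HBase 1.x admin API, assuming a hypothetical table "mytable" and family "cf" (the same can be done from the HBase shell, and depending on your version the table may need to be disabled first):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class TuneColumnFamily {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            HColumnDescriptor cf = new HColumnDescriptor("cf");
            cf.setBlocksize(256 * 1024);                          // raise from the 64 KB default
            cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // one of the encoding options
            cf.setBloomFilterType(BloomType.ROWCOL);              // or BloomType.ROW
            admin.modifyColumn(TableName.valueOf("mytable"), cf);
        }
    }
}
```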
... View more
03-22-2017
10:51 AM
2 Kudos
The table details and corresponding stats are stale at the region server, so you might be getting StaleRegionBoundaryCacheException, or else you might be hitting PHOENIX-2447. Once you drop the view, the stats are invalidated, and when you query again after recreating the view, the new stats are repopulated from the system stats table, which is why you are not seeing the same issue afterwards. Many stats-related issues are fixed in Phoenix 4.7.0, so it's better to upgrade to HDP 2.4 or above if you are on an older version, or to Phoenix 4.7.0 in the case of your own clusters.
... View more
12-30-2016
08:27 AM
After rolling back the configurations, did you restart the server?
... View more
12-30-2016
07:13 AM
Then you need to write bad-line detection code in your MapReduce job, same as ImportTsv does, and add it as a counter to the job so you can find the number of bad lines (see the sketch below). https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterMapper.java#L192
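A minimal mapper sketch of that idea, assuming tab-separated input with three expected fields and a hypothetical column family "cf" (the real validation rules would mirror whatever your parser considers a bad line):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BadLineCountingMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    public enum Counters { BAD_LINES }

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\t", -1);
        // Treat anything that does not have the expected shape as a bad line.
        if (fields.length != 3 || fields[0].isEmpty()) {
            context.getCounter(Counters.BAD_LINES).increment(1); // shows up in the job counters
            return;                                              // skip instead of failing the job
        }
        byte[] rowKey = Bytes.toBytes(fields[0]);
        Put put = new Put(rowKey);
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes(fields[1]));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col2"), Bytes.toBytes(fields[2]));
        context.write(new ImmutableBytesWritable(rowKey), put);
    }
}
```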
... View more
12-30-2016
07:01 AM
You have mentioned the HDP version as 2.4.2 and the Phoenix version as 4.7, but doesn't HDP 2.4.2 ship Phoenix 4.4? Can you check the versions again? You can upgrade to HDP 2.5 to get the namespace feature; then things should work properly.
... View more
12-30-2016
06:44 AM
2 Kudos
Are you using ImportTsv to import the data from HDFS to HBase? If that's the case, you can check the "Bad Lines" counter value, which gives the number of rows that don't match the column details you specified for the job. If you don't want the job to continue in case of bad lines, you can pass -Dimporttsv.skip.bad.lines=false to the ImportTsv job.
... View more
12-21-2016
01:37 PM
1 Kudo
You need to include phoenix-spark-4.4.0-HBase-1.1.jar in the jars list as well.
... View more
12-20-2016
12:16 PM
1 Kudo
You can use the dynamic columns feature supported by Phoenix for your use case: http://phoenix.apache.org/dynamic_columns.html
bq. How do I know how many qualifiers are there for each row
Unless we scan through the full row we cannot find the columns; this can be achieved with the HBase APIs only.
bq. select first and last qualifier data for keys in (1231, 321)
If we don't know the columns, we need to select all the columns and take the first and last column values, but dynamic columns won't be returned unless they are declared in the query (see the sketch below). You can also try extendable views for this use case: https://phoenix.apache.org/views.html
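A minimal JDBC sketch of the dynamic-columns syntax, assuming a hypothetical table EVENTS with a BIGINT primary key ID, a dynamic column named LAST_QUALIFIER, and a local ZooKeeper quorum:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DynamicColumnsExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Write a dynamic column by declaring its name and type inline in the UPSERT.
            stmt.executeUpdate(
                "UPSERT INTO EVENTS (ID, LAST_QUALIFIER VARCHAR) VALUES (1231, 'some value')");
            conn.commit();
            // Dynamic columns are not returned by SELECT *; re-declare them to read them back.
            ResultSet rs = stmt.executeQuery(
                "SELECT ID, LAST_QUALIFIER FROM EVENTS (LAST_QUALIFIER VARCHAR) "
                + "WHERE ID IN (1231, 321)");
            while (rs.next()) {
                System.out.println(rs.getLong("ID") + " -> " + rs.getString("LAST_QUALIFIER"));
            }
        }
    }
}
```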
... View more
12-12-2016
09:09 AM
2 Kudos
You can map the HBase table into Hive as in the link below: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration and then use the Export/Import feature to replicate the data: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport
... View more
11-29-2016
02:48 PM
You can use Thrift to connect to HBase from PHP: https://wiki.apache.org/hadoop/Hbase/ThriftApi You can find an example here: https://github.com/apache/hbase/blob/master/hbase-examples/src/main/php/DemoClient.php
... View more
11-29-2016
09:51 AM
@ARUN Or else, can you try something similar with explicit type conversion: A = load 'hbase://query/select col1,col2.. from TRANSACTION' using org.apache.phoenix.pig.PhoenixHBaseLoader('localhost') as (rowKey:chararray, col_a:int, col_b:double, col_c:chararray);
... View more
11-29-2016
09:40 AM
1 Kudo
Yes, Export generates sequence files. If you want to import the exported data back into another HBase table in a different cluster then you can use it; otherwise it won't help. Pig should map the data types properly. Can you try specifying the column list rather than * and check? For example: A = load 'hbase://query/select col1,col2.. from TRANSACTION' using org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
... View more
11-29-2016
09:06 AM
Better to go with @Ankit Singhal's suggestion of using Pig/Hive or Spark to export.
... View more
11-29-2016
08:58 AM
1 Kudo
The Phoenix pherf utility exports details like how much time a query took, etc., not the data. There is no way to export data to CSV from Phoenix. Can't you use the Export utility provided by HBase?
... View more
11-28-2016
09:53 AM
There is still a chance that all the data is going to the first or last region of the table. So it's better to determine good split keys based on the row-key data set to fix this issue, for example by pre-splitting the table as sketched below.
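A minimal sketch of pre-splitting with the HBase 1.x admin API, assuming a hypothetical table "mytable", family "cf", and split points chosen from the actual row-key distribution:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
            desc.addFamily(new HColumnDescriptor("cf"));
            // Split points picked from the observed row-key distribution (hypothetical values).
            byte[][] splits = new byte[][] {
                Bytes.toBytes("20000000"),
                Bytes.toBytes("40000000"),
                Bytes.toBytes("60000000"),
                Bytes.toBytes("80000000")
            };
            admin.createTable(desc, splits);
        }
    }
}
```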
... View more