Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 953 | 08-07-2017 08:39 AM
 | 1814 | 07-26-2017 06:06 AM
 | 5621 | 12-30-2016 08:29 AM
 | 4249 | 11-28-2016 08:08 AM
 | 2981 | 11-21-2016 02:16 PM
07-26-2017
09:34 AM
1 Kudo
You can't authenticate to Phoenix/HBase/HDFS with a username and password; the only way is to go through Kerberos authentication.
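For reference, a rough sketch of a JDBC connection on a Kerberized cluster (the quorum, principal, and keytab path below are placeholders; the URL format is jdbc:phoenix:&lt;zk quorum&gt;:&lt;port&gt;:&lt;root znode&gt;:&lt;principal&gt;:&lt;keytab&gt;):
import java.sql.Connection;
import java.sql.DriverManager;

// principal and keytab are appended to the Phoenix JDBC URL
String url = "jdbc:phoenix:zk1.example.com:2181:/hbase-secure"
           + ":hbaseuser@EXAMPLE.COM:/etc/security/keytabs/hbaseuser.keytab";
try (Connection conn = DriverManager.getConnection(url)) {
    System.out.println("connected: " + !conn.isClosed());
}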
07-26-2017
09:22 AM
Can you try something like this:
String querySql = "SELECT * FROM ENVI_DATA WHERE PATIENT_ID=?";
RowMapper<EnviData> rowMapper = new RowMapper<EnviData>() {
    @Override
    public EnviData mapRow(ResultSet rs, int rowNum) throws SQLException {
        EnviData data = new EnviData();
        data.setPatientId(rs.getString(1));
        data.setUserRoutes(rs.getArray(7).getArray());
        return data; // return the mapped row
    }
};
// for a SELECT, use query() rather than update(); it returns a List<EnviData>
return jdbcTemplate.query(querySql, new Object[]{id}, rowMapper);
07-26-2017
09:00 AM
If these are the only properties you need from hbase-site.xml, you can create a connection by passing them directly in a Properties object:
Properties props = new Properties();
props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
props.setProperty(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE, Boolean.toString(false));
Connection conn = DriverManager.getConnection(getUrl(), props);
07-26-2017
08:55 AM
Have you confirmed that "ER_V13_SPLIT_511_GZ_V2_METERKEY" doesn't appear when you run !tables in sqlline? (You can also query SYSTEM.CATALOG to check whether any link for this index still exists.) As a workaround, you can try creating "ER_V13_SPLIT_511_GZ_V2_METERKEY" from the HBase shell using the same table descriptor as the data table, just with the index's table name.
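A rough sketch of the SYSTEM.CATALOG check over JDBC (the ZooKeeper URL is a placeholder, and the projected columns assume the standard catalog schema):
import java.sql.*;

Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure");
ResultSet rs = conn.createStatement().executeQuery(
    "SELECT TABLE_SCHEM, TABLE_NAME, LINK_TYPE FROM SYSTEM.CATALOG" +
    " WHERE TABLE_NAME = 'ER_V13_SPLIT_511_GZ_V2_METERKEY'");
while (rs.next()) {
    // any row returned means some metadata/link for the index still exists
    System.out.println(rs.getString(1) + "." + rs.getString(2) + " link_type=" + rs.getObject(3));
}
rs.close();
conn.close();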
07-26-2017
06:13 AM
You can read the data from Phoenix using the phoenix-spark connector and then, using Spark's native APIs, join (or union) those data frames into a single data frame.
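Something like this rough sketch (assuming Spark 2.x; the table names and zkUrl are placeholders):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("phoenix-union").getOrCreate();
Dataset<Row> df1 = spark.read().format("org.apache.phoenix.spark")
        .option("table", "TABLE1").option("zkUrl", "zkhost:2181").load();
Dataset<Row> df2 = spark.read().format("org.apache.phoenix.spark")
        .option("table", "TABLE2").option("zkUrl", "zkhost:2181").load();
// union needs identical schemas; otherwise join on a common key column
Dataset<Row> combined = df1.union(df2);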
07-26-2017
06:06 AM
It may be due to https://issues.apache.org/jira/browse/PHOENIX-2169
Regarding "It's odd to notice that sometimes I am able to upsert 50k registers in a query (a few days), and sometimes I am limited to 9k registers (around 2 days) or less": this can happen irregularly when an UPSERT SELECT is scanning and mutating in parallel, because ProjectedColumnExpression is not thread safe. So you may try backporting PHOENIX-2169 into your distribution, or upgrade to HDP 2.5 or Phoenix 4.7.
06-22-2017
01:10 PM
Yeah, it seems like a bug that the hint is ignored in the case of a global functional index. As a workaround, you can include all columns in the covered index (I mean option #2).
06-22-2017
01:08 PM
Yeah, it seems like a bug that the hint is ignored in the case of a global functional index. As a workaround, you can include all columns in the covered index (I mean option #2).
06-22-2017
12:53 PM
SELECT * projects all the columns. Option 1 suggests that you project only your PK columns and the indexed column if you really want to use the non-covered global functional index.
Ah, you mean the hint is not helping either. Let me check this at my side.
06-22-2017
11:18 AM
SELECT /*+ INDEX(S1.TABLE1 IDX_UPPER) */ * FROM S1.TABLE1 WHERE UPPER(FIRST_NAME) = 'ABC';
It looks like you are trying to project all the columns of the table, but your functional index is not covered (meaning it does not include all the columns of the table). You have the following options:
* Either project only the PK columns and the function: SELECT pk1, pk2, pk3, ..., UPPER(FIRST_NAME) FROM S1.TABLE1
* Or create a covered index: CREATE INDEX IDX_UPPER ON S1.TABLE1 (UPPER(FIRST_NAME)) INCLUDE (COL1, COL2, ...)
* Or, if you are sure that you will have fewer rows per UPPER(FIRST_NAME), create a non-covered local index (it will fetch the remaining columns from the data table automatically): CREATE LOCAL INDEX IDX_UPPER ON S1.TABLE1 (UPPER(FIRST_NAME))
04-02-2017
11:52 AM
Is there a table showing how to compare different keywords in explain plans? E.g. "PARALLEL 1-WAY ROUND ROBIN FULL SCAN" versus "PARALLEL 9-WAY FULL SCAN": which one would be faster if everything else was the same?
See https://phoenix.apache.org/tuning_guide.html

Is there a way to determine if all of my RegionServers are being used during the execution of a particular query?
Currently, with the explain plan there is no way to determine that all your RegionServers are being used for the query. If there is a full table scan, then all the regions will be scanned, but you still can't be sure that all RegionServers are involved, as that depends on your balancer policy; the distribution can be checked from the Master UI.

What does the 9-CHUNK in "CLIENT 9-CHUNK 4352857 ROWS 943718447 BYTES" mean? Are more CHUNKs better or worse?
See https://phoenix.apache.org/tuning_guide.html

What is the impact of using functions, such as HOUR(ts), on query execution time? Is the impact 1%, 10%, 50%, etc.?
It depends. For example, if you apply HOUR(ts) where 'ts' is the first part of the primary key and you don't have a functional index on it, the query may turn into a full table scan, where the impact could be anything. Beyond that, the processing time of evaluating the function on the column values depends on the function's complexity and the number of rows you are scanning.
04-02-2017
10:55 AM
You can set hive.merge.mapredfiles to true; it will run another MapReduce job after the original one to combine all the files into one. https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration
02-23-2017
05:25 PM
It seems your command is incorrect; you need to put ":" between the port and the znode:
hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dfs.permissions.umask-mode=000 --table TEST.FLIGHT_SEGMENT --input /tmp/test/Segments.tsv -z devone1.lab.com:2181:/hbase-unsecure
12-30-2016
08:32 AM
These properties (phoenix.schema.isNamespaceMappingEnabled and phoenix.schema.mapSystemTablesToNamespace) should not be rolled back. https://phoenix.apache.org/namspace_mapping.html
12-30-2016
08:29 AM
1 Kudo
I don't know the best way to include hbase-site.xml in the SQuirreL classpath, but people have tried putting hbase-site.xml inside phoenix-client.jar and it seems to work for them. https://distcp.quora.com/Connect-and-query-Apache-Phoenix-with-Squirrel-from-Windows https://mail-archives.apache.org/mod_mbox/phoenix-user/201409.mbox/%3CCAF1+Vs8TMeSeUUWS-b7FYkqNgxdrLWVVB0uHQW5fVS0XQPUp+Q@mail.gmail.com%3E
12-30-2016
06:01 AM
Thanks @Gabriela Martinez for sharing. Would you mind creating a separate question tagged with Phoenix and NiFi, then answering and accepting it? It will benefit other users who are using NiFi with Phoenix.
12-18-2016
08:15 AM
If you know that all rows of the table have the same number of columns, then you can just fetch the first row (with a scan and a limit) and parse the column names for each column family, as in the sketch below. Otherwise, @Sergey Soldatov's answer is the only way.
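A rough sketch with the HBase Java client (the table name is a placeholder):
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
Table table = conn.getTable(TableName.valueOf("MY_TABLE"));
Scan scan = new Scan();
scan.setFilter(new PageFilter(1)); // fetch only the first row
ResultScanner scanner = table.getScanner(scan);
Result first = scanner.next();
if (first != null) {
    // getMap(): family -> (qualifier -> (timestamp -> value))
    first.getMap().forEach((family, qualifiers) ->
        qualifiers.keySet().forEach(qualifier ->
            System.out.println(Bytes.toString(family) + ":" + Bytes.toString(qualifier))));
}
scanner.close();
table.close();
conn.close();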
12-14-2016
05:16 AM
What benefit are you expecting from archiving HBase data like a "Hadoop Archive"? Or is your purpose just to archive HBase data in any form?
12-01-2016
07:12 AM
It's great that it works for you. Can you accept the answer now, so that it will be helpful for other users?
11-30-2016
11:05 AM
1 Kudo
It is in HDFS, and if you are using HDP it may be under /apps/hbase/data/data/<namespace>/<tableName>/.tabledesc/
11-30-2016
10:28 AM
2 Kudos
Table metadata is stored as a table descriptor in the corresponding table directory, and it is read and altered there itself. I don't think we have any znode where the column family information is kept during an ALTER or CREATE TABLE.
11-29-2016
09:42 AM
I don't see much of a problem with the code. Can you try adding a debug point in TotalOrderPartitioner.setConf() (around line 88) and see why the split points are different when read from the partition file?
11-29-2016
09:02 AM
Are you specifying a number of reducers for the job that is not equal to the number of regions in your table?
11-29-2016
08:59 AM
2 Kudos
You can use the Hive storage handler, Pig, or Spark to do that. https://phoenix.apache.org/hive_storage_handler.html https://phoenix.apache.org/pig_integration.html https://phoenix.apache.org/phoenix_spark.html
11-28-2016
02:47 PM
Can you try adding serialization=PROTOBUF to your connection string? There seems to be a mismatch of the "phoenix.queryserver.serialization" property between client and server.
jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
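For example, a rough thin-client sketch (host and port are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

Class.forName("org.apache.phoenix.queryserver.client.Driver"); // Phoenix thin driver
Connection conn = DriverManager.getConnection(
        "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF");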
11-28-2016
08:08 AM
1 Kudo
Can you check that the Phoenix Query Server and Phoenix client versions are the same?
11-21-2016
02:18 PM
1 Kudo
Make sure that you have updated the hbase-site.xml in your sqlline classpath for the properties to take effect.
11-21-2016
02:16 PM
There was a bug with one API (PhoenixRuntime#getTable()) in HDP 2.2 where case-sensitive table names were not handled properly; that's why automatic rebuilding is not happening to bring your disabled index up to date and make it active again. You now have two options: either move to HDP 2.3 (or later), or drop the current index, create an ASYNC index on your table (since your table is large), and run IndexTool to build the index data for you via a MapReduce job; see the sketch after the link. Refer to this for creating an ASYNC index and running IndexTool:
http://phoenix.apache.org/secondary_indexing.html
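A rough sketch, assuming placeholder schema/table/column/index names:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure");
Statement stmt = conn.createStatement();
// ASYNC defers index population to the IndexTool MapReduce job
stmt.execute("CREATE INDEX MY_IDX ON MY_SCHEMA.MY_TABLE (MY_COL) ASYNC");
stmt.close();
conn.close();
// then build the index data on the cluster, for example:
//   hbase org.apache.phoenix.mapreduce.index.IndexTool --schema MY_SCHEMA \
//     --data-table MY_TABLE --index-table MY_IDX --output-path MY_IDX_HFILES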
11-18-2016
06:14 AM
There seems to be a bug in the automatic rebuild code's handling of case-sensitive table names. Can you tell us which version of HDP or Phoenix you are using?
11-16-2016
10:15 AM
Are you running the Spark job on the same cluster or from a different cluster? If it's from a different cluster, check whether the nodes of that cluster have access to lvadcnc06.hk.standardchartered.com.