Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2705 | 08-07-2017 08:39 AM
 | 4384 | 07-26-2017 06:06 AM
 | 10051 | 12-30-2016 08:29 AM
 | 7909 | 11-28-2016 08:08 AM
 | 7835 | 11-21-2016 02:16 PM
10-15-2016
07:09 PM
1 Kudo
It is related to PHOENIX-2434, which has been fixed in HDP 2.5 / Apache Phoenix 4.7.
10-13-2016
08:18 PM
There is a typo: you need to make the call to port 20050 instead of 20550. curl -v -X GET -H "Accept: text/xml" "http://xxxxx.xxxx.hortonworks.com:20050/"
10-05-2016
07:18 AM
2 Kudos
Can you check which version of Java you are using (for example, with java -version)? It should be OpenJDK 1.7 or above.
10-03-2016
08:16 AM
You just need to add the following two attributes to the Put you are already creating for inserting data into HBase:

put.setAttribute(PhoenixIndexCodec.INDEX_MD, indexMetaData);
put.setAttribute(PhoenixIndexCodec.INDEX_UUID, uuidValue);

To get indexMetaData, you can refer to the snippet in my other answer.
09-29-2016
07:19 AM
If you are writing to the data table using the HBase API and want the secondary indexes created with Phoenix to be updated as well, you need to include the following attributes in the mutations (Put/Delete) so that Phoenix knows which indexes need to be updated. For example, for a Put mutation:

PTable dataPTable = PhoenixRuntime.getTableNoCache(conn, dataTableFullName);
List<PTable> indexes = dataPTable.getIndexes();
List<IndexMaintainer> maintainers = Lists.newArrayListWithExpectedSize(indexes.size());
for (PTable index : indexes) {
    maintainers.add(index.getIndexMaintainer(dataPTable, conn));
}
// Serialize the index metadata that the Phoenix coprocessor reads from the mutation
ImmutableBytesWritable indexMetaDataPtr = new ImmutableBytesWritable(ByteUtil.EMPTY_BYTE_ARRAY);
IndexMaintainer.serializeAdditional(dataPTable, indexMetaDataPtr, indexes, conn);
byte[] indexMetaData = ByteUtil.copyKeyBytesIfNecessary(indexMetaDataPtr);
Put put = new Put(CellUtil.cloneRow(cell)); // cell: a Cell of the row being written
put.setAttribute(PhoenixIndexCodec.INDEX_MD, indexMetaData);
put.setAttribute(PhoenixIndexCodec.INDEX_UUID, uuidValue);
dataHtable.put(put);

Adjust some variables (conn, cell, uuidValue, dataHtable) to your context.
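Since the answer mentions Put/Delete mutations but only shows a Put, here is a minimal sketch of the analogous Delete, reusing indexMetaData, uuidValue and dataHtable from the snippet above (rowKey is a placeholder for the key of the row being deleted):

// Attach the same index metadata so Phoenix also updates the indexes on delete
Delete del = new Delete(rowKey); // rowKey: byte[] key of the row to delete
del.setAttribute(PhoenixIndexCodec.INDEX_MD, indexMetaData);
del.setAttribute(PhoenixIndexCodec.INDEX_UUID, uuidValue);
dataHtable.delete(del);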
09-28-2016
12:30 PM
3 Kudos
HDP 2.3

Index Life Cycle (diagram omitted here; client events are shown in orange in the original image)

Column names in the SYSTEM.CATALOG table

Query: select TABLE_NAME, DATA_TABLE_NAME, INDEX_TYPE, INDEX_STATE, INDEX_DISABLE_TIMESTAMP from SYSTEM.CATALOG where INDEX_TYPE is not null;

- TABLE_NAME: name of the index
- DATA_TABLE_NAME: name of the index's parent data table
- INDEX_TYPE: GLOBAL(1), LOCAL(2)
- INDEX_STATE: the index state, stored as an abbreviation: BUILDING("b"), USABLE("e"), UNUSABLE("d"), ACTIVE("a"), INACTIVE("i"), DISABLE("x"), REBUILD("r"). The USABLE, UNUSABLE, DISABLE and REBUILD states can also be set by the client.
- INDEX_DISABLE_TIMESTAMP: the timestamp at which the index was disabled. It is 0 if the index is active or was disabled manually by the client, and non-zero when the index was disabled due to write failures.

The following state descriptions are from the Phoenix site:

- DISABLE will cause no further index maintenance to be performed on the index, and it will no longer be considered for use in queries.
- REBUILD will completely rebuild the index and, upon completion, will enable the index to be used in queries again.
- BUILDING will partially rebuild the index from the last disabled timestamp and, upon completion, will enable the index to be used in queries again.
- INACTIVE/UNUSABLE will cause the index to no longer be considered for use in queries; however, index maintenance will continue to be performed.
- ACTIVE/USABLE will cause the index to again be considered for use in queries.

Note that a disabled index must be rebuilt and cannot simply be set to USABLE.

Automatic rebuild process:

- MetaDataRegionObserver is responsible for running the rebuild thread, so the upsert-select query that updates a disabled index is executed from the region server hosting the SYSTEM.CATALOG table.
- INACTIVE and DISABLE indexes are chosen for rebuild (provided all regions of the index table are online).
- All indexes of all tables are rebuilt serially.
- The index is rebuilt from the disabled timestamp minus phoenix.index.failure.handling.rebuild.overlap.time (default 5 minutes) up to the SCN.
- The upsert-select query used is: UPSERT /*+ NO_INDEX */ INTO index_table_name(indexedCols) SELECT dataCols FROM data_table

Index re-build lifecycle (diagram omitted here)

Properties to control automatic rebuilding:

- phoenix.index.failure.handling.rebuild (default true)
- phoenix.index.failure.handling.rebuild.interval (default 10 seconds)

Consistency/write failures: https://phoenix.apache.org/secondary_indexing.html#Mutable_Tables
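For illustration, a minimal JDBC sketch (not from the original post) that drives the client-settable states with ALTER INDEX and then checks the result with the catalog query above; the connection URL and the names my_idx/my_table are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IndexLifeCycleDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Stop index maintenance; a disabled index must be rebuilt, not set USABLE
            stmt.execute("ALTER INDEX my_idx ON my_table DISABLE");
            stmt.execute("ALTER INDEX my_idx ON my_table REBUILD");
            // List every index and its current state, as described above
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT TABLE_NAME, INDEX_STATE, INDEX_DISABLE_TIMESTAMP"
                    + " FROM SYSTEM.CATALOG WHERE INDEX_TYPE IS NOT NULL")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " state=" + rs.getString(2)
                            + " disabledAt=" + rs.getLong(3));
                }
            }
        }
    }
}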
09-22-2016
07:01 AM
1 Kudo
It seems you are connecting to the wrong port, as port 8085 is bound to the web UI. Try connecting to 8080 if you started the REST server without specifying a port. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_HDP_Reference_Guide/content/hbase-ports.html
09-21-2016
03:28 PM
It is actually weird; if the table is not found in the metadata cache, it should catch the exception and try to update the cache from the server. Not sure why the exception is propagated out so early.
09-16-2016
01:21 PM
Have you also tried what @Josh Elser mentioned on the following thread to get at the root cause of the problem? http://search-hadoop.com/m/9UY0h2etOtv1p28Si2&subj=Phoenix+Spark+JDBC+Kerberos+
- It could be a missing or inaccurate /etc/krb5.conf on the nodes running the Spark tasks.
- You could try setting the Java system property sun.security.krb5.debug=true in the Spark executors (a sketch of this follows below).
- You could try setting org.apache.hadoop.security=DEBUG in the log4j config.
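A minimal sketch of the second suggestion, enabling Kerberos debug output in the JVM before opening the Phoenix connection (the URL, principal and keytab path are placeholders; in Spark executors the property has to be passed via spark.executor.extraJavaOptions rather than set in code):

// Enable verbose Kerberos logging before the first secure connection is made
System.setProperty("sun.security.krb5.debug", "true");
// Phoenix JDBC URL form: jdbc:phoenix:<quorum>:<port>:<znode>:<principal>:<keytab>
java.sql.Connection conn = java.sql.DriverManager.getConnection(
        "jdbc:phoenix:zk-host:2181:/hbase-secure:user@EXAMPLE.COM:/etc/security/keytabs/user.keytab");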
09-16-2016
09:59 AM
Then it's a different issue you are facing. Can you follow this thread? http://search-hadoop.com/m/9UY0h2etOtv1p28Si2&subj=Phoenix+Spark+JDBC+Kerberos+