Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 955 | 08-07-2017 08:39 AM
 | 1816 | 07-26-2017 06:06 AM
 | 5622 | 12-30-2016 08:29 AM
 | 4252 | 11-28-2016 08:08 AM
 | 2981 | 11-21-2016 02:16 PM
11-15-2016
07:18 AM
bq. we are getting null pointer exception, when we try to access the table programmatically, but when we try with hbase console its working, what could be the issue?
Can you check that your application also has the same hbase-site.xml in its classpath?
bq. when i checked in zkcli get /hbase-secure, the data length is 0
When you do "ls" on your parent znode /hbase-secure using zkCli, you should see the following nodes. If they don't exist, it means your cluster was formatted or is running on a different znode (check all znodes at the root by doing "ls /").
[zk: localhost:2181(CONNECTED) 1] ls /hbase-secure
[meta-region-server, backup-masters, table, draining, region-in-transition, table-lock, running, master, namespace, hbaseid, online-snapshot, replication, splitWAL, recovering-regions, rs, flush-table-proc]
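A quick way to rule out a classpath problem is to point the client at the quorum and parent znode explicitly. This is a minimal sketch, assuming the standard HBase 1.x client API; the host name and table name are placeholders, not values from the original question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class ZnodeCheck {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() loads hbase-site.xml from the classpath;
        // if that file is missing, the quorum/znode fall back to defaults and
        // table access typically fails (NPEs, retries, timeouts).
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1.example.com"); // placeholder host
        conf.set("zookeeper.znode.parent", "/hbase-secure");   // parent znode from the question
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("MY_TABLE"))) { // placeholder table
            System.out.println("Connected: " + table.getName());
        }
    }
}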
11-15-2016
07:13 AM
3 Kudos
Can you first check whether your index is active or not? https://community.hortonworks.com/articles/58818/phoenix-index-lifecycle.html
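If it helps, here is a minimal sketch of that check over Phoenix JDBC; the URL and index name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IndexStateCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL; point it at your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1.example.com");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT TABLE_NAME, INDEX_STATE FROM SYSTEM.CATALOG "
                 + "WHERE INDEX_TYPE IS NOT NULL AND TABLE_NAME = ?")) {
            ps.setString(1, "MY_INDEX"); // placeholder index name
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // "a" means ACTIVE; see the linked article for the full state list.
                    System.out.println(rs.getString(1) + " -> " + rs.getString(2));
                }
            }
        }
    }
}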
10-15-2016
07:09 PM
1 Kudo
It is related to PHOENIX-2434, which has been fixed in HDP 2.5 / Apache Phoenix 4.7.
10-13-2016
08:18 PM
There is a typo: you need to make the call to port 20050 instead of 20550. curl -v -X GET -H "Accept: text/xml" "http://xxxxx.xxxx.hortonworks.com:20050/"
10-05-2016
07:18 AM
2 Kudos
Can you check which version of Java you are using? It should be OpenJDK 1.7 or above.
10-03-2016
08:16 AM
You just need to add the following two attributes to the Put you are already creating for inserting data into HBase:
put.setAttribute(PhoenixIndexCodec.INDEX_MD, indexMetaData);
put.setAttribute(PhoenixIndexCodec.INDEX_UUID, uuidValue);
To get indexMetaData, you can refer to the snippet in my 09-29-2016 answer.
09-29-2016
07:19 AM
If writing to a data table using the HBase API and you want the secondary indexes created with Phoenix to be updated as well, you need to include the following attributes in your mutations (Put/Delete) so that Phoenix knows which indexes need to be updated. For example, for a Put mutation:

// Assumes the Phoenix and HBase client classes are imported: PhoenixRuntime,
// ByteUtil, PTable, IndexMaintainer, PhoenixIndexCodec, CellUtil, Put,
// ImmutableBytesWritable, and Guava's Lists.
PTable dataPTable = PhoenixRuntime.getTableNoCache(conn, dataTableFullName);
List<PTable> indexes = dataPTable.getIndexes();
List<IndexMaintainer> maintainers = Lists.newArrayListWithExpectedSize(indexes.size());
for (PTable index : indexes) {
    maintainers.add(index.getIndexMaintainer(dataPTable, conn));
}
// Serialize the index metadata for the data table's indexes.
ImmutableBytesWritable indexMetaDataPtr = new ImmutableBytesWritable(ByteUtil.EMPTY_BYTE_ARRAY);
IndexMaintainer.serializeAdditional(dataPTable, indexMetaDataPtr, indexes, conn);
byte[] indexMetaData = ByteUtil.copyKeyBytesIfNecessary(indexMetaDataPtr);
// Attach the metadata so the Phoenix coprocessor maintains the indexes.
Put put = new Put(CellUtil.cloneRow(cell));
put.setAttribute(PhoenixIndexCodec.INDEX_MD, indexMetaData);
put.setAttribute(PhoenixIndexCodec.INDEX_UUID, uuidValue);
dataHtable.put(put);

Adjust some variables accordingly in the above snippet (conn, dataTableFullName, cell, uuidValue, and dataHtable come from your application).
09-28-2016
12:30 PM
3 Kudos
HDP 2.3

Index Life Cycle
(Figure: index life cycle diagram; client events are shown in orange.)

Column names in the SYSTEM.CATALOG table

Query: "select TABLE_NAME, DATA_TABLE_NAME, INDEX_TYPE, INDEX_STATE, INDEX_DISABLE_TIMESTAMP from system.catalog where INDEX_TYPE is not null;"

- TABLE_NAME: name of the index.
- DATA_TABLE_NAME: name of the parent data table of the index.
- INDEX_TYPE: GLOBAL(1), LOCAL(2).
- INDEX_STATE: index states (abbreviations as stored in SYSTEM.CATALOG; the states in bold are available to the client too): BUILDING("b"), USABLE("e"), UNUSABLE("d"), ACTIVE("a"), INACTIVE("i"), DISABLE("x"), REBUILD("r").

(The paragraph below is from the Phoenix site.)
DISABLE will cause no further index maintenance to be performed on the index, and it will no longer be considered for use in queries. REBUILD will completely rebuild the index and upon completion will enable the index to be used in queries again. BUILDING will partially rebuild the index from the last disabled timestamp and upon completion will enable the index to be used in queries again. INACTIVE/UNUSABLE will cause the index to no longer be considered for use in queries; however, index maintenance will continue to be performed. ACTIVE/USABLE will cause the index to again be considered for use in queries. Note that a disabled index must be rebuilt and cannot be set as USABLE.

- INDEX_DISABLE_TIMESTAMP: the timestamp at which the index was disabled. It will be 0 if the index is active or was disabled by the client manually, and non-zero when the index was disabled during write failures.

Automatic rebuild process:
- MetaDataRegionObserver is responsible for running the rebuild thread, so the upsert select query that updates the disabled index is executed from the regionserver hosting the SYSTEM.CATALOG table.
- INACTIVE and DISABLE indexes are chosen for rebuild (provided all the regions of the index table are online).
- All indexes of all tables are built serially.
- The index is built from the disabled timestamp minus phoenix.index.failure.handling.rebuild.overlap.time (default 5 minutes) up to the SCN.
- The upsert select query used is "UPSERT /*+ NO_INDEX */ INTO index_table_name(indexedCols) select dataCols from data_table".

Index re-build lifecycle
(Figure: index rebuild lifecycle diagram.)

Properties to control automatic rebuilding:
- "phoenix.index.failure.handling.rebuild" (default true)
- "phoenix.index.failure.handling.rebuild.interval" (default 10 seconds)

Consistency/Write Failures: https://phoenix.apache.org/secondary_indexing.html#Mutable Tables
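The client-driven transitions above (DISABLE, REBUILD, USABLE, UNUSABLE) are issued with ALTER INDEX. A minimal sketch over Phoenix JDBC, with placeholder URL, index, and table names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexRebuild {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1.example.com");
             Statement stmt = conn.createStatement()) {
            // Stop index maintenance and exclude the index from queries.
            stmt.execute("ALTER INDEX MY_IDX ON MY_TABLE DISABLE");
            // A disabled index cannot simply be set USABLE; it must be rebuilt.
            stmt.execute("ALTER INDEX MY_IDX ON MY_TABLE REBUILD");
        }
    }
}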
09-22-2016
07:01 AM
1 Kudo
It seems you are connecting to the wrong port, as port 8085 is bound to the web UI. Try connecting to 8080 if you have started the REST server without specifying any port. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_HDP_Reference_Guide/content/hbase-ports.html
09-21-2016
03:28 PM
It is actually weird; if the table is not found in the metadata cache, it should catch the exception and try to update the cache from the server. Not sure why the exception is propagated out so early.
09-16-2016
01:21 PM
Have you also tried what @Josh Elser mentioned on the following thread to get to the root cause of the problem? http://search-hadoop.com/m/9UY0h2etOtv1p28Si2&subj=Phoenix+Spark+JDBC+Kerberos+
- It could be a missing/inaccurate /etc/krb5.conf on the nodes running the Spark tasks.
- You could try setting the Java system property sun.security.krb5.debug=true in the Spark executors (see the sketch below).
- You could try setting org.apache.hadoop.security=DEBUG in the log4j config.
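For the second point, a minimal sketch of enabling the Kerberos debug property from application code before the Phoenix connection is opened; the JDBC URL is a placeholder, and in Spark executors the property is normally passed via spark.executor.extraJavaOptions instead:

import java.sql.Connection;
import java.sql.DriverManager;

public class Krb5Debug {
    public static void main(String[] args) throws Exception {
        // Must be set before the first secure connection attempt;
        // prints the Kerberos negotiation details to stdout.
        System.setProperty("sun.security.krb5.debug", "true");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:zk1.example.com:2181:/hbase-secure")) { // placeholder URL
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}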
09-16-2016
09:59 AM
Then it's a different issue you are facing. Can you follow this thread: http://search-hadoop.com/m/9UY0h2etOtv1p28Si2&subj=Phoenix+Spark+JDBC+Kerberos+
09-16-2016
07:48 AM
And along with that, just have hbase-site.xml in the classpath of Spark. You can add hbase-site.xml to the Spark conf directory on all nodes, or add the needed properties to spark-defaults.conf. OR (try):
spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
09-16-2016
07:45 AM
As an improvement, we could fetch the namespace mapping properties from the server at the client so that every client doesn't need to specify them; I have raised a JIRA for the same: https://issues.apache.org/jira/browse/PHOENIX-3288
09-16-2016
07:41 AM
1 Kudo
For trace 1 (if you are using sqlline.py): can you check your <PHOENIX_HOME>/bin directory, remove any hbase-site.xml present there, and try again. If you are using a Java program, you need to ensure hbase-site.xml is in the classpath, or add these properties while creating the connection.
For trace 2 (Spark job): you need to include hbase-site.xml in the classpath of Spark. You can add hbase-site.xml to the Spark conf directory on all nodes, or add the needed properties to spark-defaults.conf.
OR (try):
spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
09-16-2016
07:31 AM
1 Kudo
Maybe you are hitting the following bug: https://issues.apache.org/jira/browse/PHOENIX-2817 Would you mind trying the workaround mentioned at the end of the ticket:
For people waiting on this fix there is a very simple workaround, provided that you use the default zk port and path.
It's as simple as only listing the server names "server1,server2" so the plugin builds the URL correctly:
jdbc:phoenix:server1,server2:2181:/hbase
Then the delegation tokens set up by spark-submit take care of security, so Phoenix doesn't need to do anything with principals or keytabs.
The thing I find a bit confusing is that for other tools the ZooKeeper quorum URL includes the port and the path, while for Phoenix the zk quorum property is just the server list.
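For illustration, connecting with the fully built URL from the quoted workaround reduces to a plain DriverManager call; the count query against SYSTEM.CATALOG is just a hypothetical smoke test:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QuorumUrlCheck {
    public static void main(String[] args) throws Exception {
        // The URL the plugin should build from the server list
        // "server1,server2" with the default zk port and path.
        String url = "jdbc:phoenix:server1,server2:2181:/hbase";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
            rs.next();
            System.out.println("SYSTEM.CATALOG rows: " + rs.getLong(1));
        }
    }
}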
09-01-2016
12:12 PM
It seems the jstack output is from a different Java version than the one HBase is running on.
Check your JAVA_HOME for HBase and use jstack from the path $JAVA_HOME/bin/jstack.
09-01-2016
12:04 PM
You need to execute the command as the same user, "hbase".
09-01-2016
11:01 AM
You can take a thread dump by running "jstack <master processid> > /tmp/hmaster_1.jstack". Take multiple of those during initialization.
09-01-2016
09:53 AM
Can you attach the master logs, and take and attach multiple jstacks (at intervals of 1 minute) during initialization of the master?
08-31-2016
06:22 AM
It seems the server adhered to the property "hbase.client.scanner.timeout.period" for lease expiration but the client has not. Can you check that your hbase-site.xml is in the classpath of your Phoenix client (application/sqlline) as well? A programmatic alternative is sketched below.
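If shipping hbase-site.xml with the application is not practical, a minimal sketch of aligning the client with the server-side value programmatically; the 60-second value is illustrative, use whatever your servers are configured with:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ScannerTimeout {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Client and server should agree on this value, otherwise the client
        // may keep using a scanner whose lease the server has already expired.
        conf.setInt("hbase.client.scanner.timeout.period", 60000); // illustrative: 60s
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            System.out.println("Scanner timeout: "
                + conf.getInt("hbase.client.scanner.timeout.period", -1) + " ms");
        }
    }
}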
08-26-2016
01:06 PM
@sankar rao, actually I don't have a YARN cluster ready to confirm which log lines need to be searched in the container logs for the file names. It will probably be better if you raise a support case so that a dedicated team can look into the issue specifically, as it depends on how the data is loaded into the table/HDFS, how the files are zipped, which input format you are using, etc.
08-25-2016
02:45 PM
Would you mind trying again by setting phoenix.query.force.rowkeyorder to true as well? phoenix.query.rowKeyOrderSaltedTable was deprecated in favor of phoenix.query.force.rowkeyorder in some release.
08-25-2016
01:56 PM
2 Kudos
"phoenix.query.rowKeyOrderSaltedTable" is the client property, can you make sure hbase-site.xml is in the classpath of your application/sqlline.
08-22-2016
11:44 AM
@sankar rao,
bq. why should i decompress the file..?
To confirm whether the selected file is actually corrupted or not.
bq. how can i get ..failed container logs because.. i could see 12 directory paths under "yarn.nodemanager.log-dirs" property and .i just confused where should i find the application logs
Actually, I don't remember the exact keyword to search for in the logs, but you can check the syslog of the container with an id similar to _14435*_237788_1_01_000062_1 and look for a line saying "processing file" or something similar.
08-22-2016
09:25 AM
It seems the zip file present in the table directory is corrupted. Try decompressing the file directly with the unzip utility (you may get the file name from the failed container logs).
08-11-2016
01:29 PM
SQL identifiers are case-insensitive by default (they are folded to upper case), so double quotes (") just tell the parser to keep the case instead of converting it back to upper case. Maybe you can create another view over your current view with case-insensitive column names. You may check the actual syntax on the site; JFYR:
create view view2(col1 integer, col2 integer) as select "col1","col2" from view1;
08-11-2016
01:18 PM
3 Kudos
It seems your hbase:acl table was not created. Can you check hbase-site.xml on the master side as well? It should have:
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
Try restarting your cluster, as the postStartMaster step should create this table. Once you are able to do scan 'hbase:acl', you will no longer see the error "ERROR: DISABLED: Security features are not available".
08-10-2016
09:18 AM
I don't know about any reset service in HBase; would you mind sharing the reference where you read about it?
08-09-2016
09:08 AM
1 Kudo
@Saurabh Rathi, the workaround you can do is to declare your field "eid" as VARCHAR and use the TO_NUMBER function to cast it to a number during queries, if required. Or, as @Josh Elser said, while storing the data in HBase you can encode your integer to Phoenix's representation of INT by using the API below (the same applies to other data types):
byte[] byte_representation = PInteger.INSTANCE.toBytes(<integer_value>);
Please see for more details:
https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
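To illustrate the encoding route, a minimal sketch that writes a Phoenix-encoded integer through the plain HBase API so Phoenix can read the column back as INTEGER; it assumes Phoenix 4.x (where PInteger lives in org.apache.phoenix.schema.types), and the table, row key, column family, and qualifier are placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.schema.types.PInteger;

public class EncodedIntWrite {
    public static void main(String[] args) throws Exception {
        // Phoenix's sortable INT encoding, not Bytes.toBytes(42).
        byte[] eid = PInteger.INSTANCE.toBytes(42);
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("EMP"))) { // placeholder table
            Put put = new Put(Bytes.toBytes("row1")); // placeholder row key
            put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("EID"), eid); // placeholder cf/qualifier
            table.put(put);
        }
    }
}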