Member since
11-14-2015
268
Posts
122
Kudos Received
29
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2681 | 08-07-2017 08:39 AM
 | 4328 | 07-26-2017 06:06 AM
 | 9926 | 12-30-2016 08:29 AM
 | 7823 | 11-28-2016 08:08 AM
 | 7686 | 11-21-2016 02:16 PM
11-29-2016
09:02 AM
Are you specifying a number of reducers for the job that is not equal to the number of regions in your table?
11-29-2016
08:59 AM
2 Kudos
You can use the Hive storage handler, the Pig integration, or the Spark plugin to do that:
https://phoenix.apache.org/hive_storage_handler.html
https://phoenix.apache.org/pig_integration.html
https://phoenix.apache.org/phoenix_spark.html
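As one concrete illustration of the Spark route, here is a minimal PySpark sketch; the table name TABLE1 and the ZooKeeper URL are hypothetical placeholders, and the actual load is commented out because it needs a running cluster with the phoenix-spark jar on the Spark classpath:

```python
# Sketch of the phoenix-spark DataFrame read path (see the phoenix_spark
# page linked above). TABLE1 and the zkUrl value are placeholders.
options = {"table": "TABLE1", "zkUrl": "localhost:2181"}

# With a live cluster and the phoenix-spark jar on the classpath:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.appName("phoenix-read").getOrCreate()
# df = spark.read.format("org.apache.phoenix.spark").options(**options).load()
# df.show()
```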
11-28-2016
02:47 PM
Can you try adding serialization=PROTOBUF to your connection string? There seems to be a mismatch of the "phoenix.queryserver.serialization" property between client and server. jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
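As a sketch, the thin-client URL can be assembled like this; the helper name thin_jdbc_url is mine, not a Phoenix API, and the host and port come from the example above:

```python
def thin_jdbc_url(host: str, port: int = 8765,
                  serialization: str = "PROTOBUF") -> str:
    # The serialization value must match the server-side
    # phoenix.queryserver.serialization setting.
    return (f"jdbc:phoenix:thin:url=http://{host}:{port};"
            f"serialization={serialization}")

print(thin_jdbc_url("localhost"))
# prints jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
```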
11-28-2016
08:08 AM
1 Kudo
Can you check that the Phoenix Query Server and Phoenix client versions are the same?
11-21-2016
02:18 PM
1 Kudo
Make sure the hbase-site.xml on your sqlline classpath is up to date so that the properties take effect.
11-21-2016
02:16 PM
There was a bug in one API (PhoenixRuntime#getTable()) in HDP 2.2 where case-sensitive table names were not handled properly; that's why automatic rebuilding is not happening to bring your disabled index up to date and make it active again. You now have two options: either move to HDP 2.3 (or later), or drop the current index, create an ASYNC index on your table (since your table is large), and run IndexTool to build the index data for you via a MapReduce job. Refer to the following for creating an ASYNC index and running IndexTool:
http://phoenix.apache.org/secondary_indexing.html
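To make the two steps concrete, here is a hedged sketch following the secondary_indexing page linked above; the table MY_TABLE, column V1, index MY_IDX, and the output path are hypothetical placeholders:

```python
# Step 1: DDL that creates the index without populating it synchronously.
ddl = "CREATE INDEX MY_IDX ON MY_TABLE (V1) ASYNC"

# Step 2: MapReduce job that builds the index data and activates the index.
# Run on a cluster node; shown here as an argument list for clarity.
index_tool_cmd = [
    "hbase", "org.apache.phoenix.mapreduce.index.IndexTool",
    "--data-table", "MY_TABLE",
    "--index-table", "MY_IDX",
    "--output-path", "/tmp/MY_IDX_HFILES",
]
print(" ".join(index_tool_cmd))
```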
11-18-2016
06:14 AM
There seems to be a bug in the automatic rebuild code around case-sensitive table names. Can you tell us which version of HDP or Phoenix you are using?
11-16-2016
10:15 AM
Are you running the Spark job on the same cluster or from a different cluster? If it's from a different cluster, check whether the nodes on that cluster have access to lvadcnc06.hk.standardchartered.com.
11-15-2016
07:18 AM
> we are getting null pointer exception, when we try to access the table programmatically, but when we try with hbase console its working, what could be the issue?

Can you check that your application also has the same hbase-site.xml in its classpath?

> when i checked in zkcli get /hbase-secure, the data length is 0

When you do "ls" on your parent znode /hbase-secure using zkcli, you should see the following nodes. If they don't exist, it means your cluster was formatted or is running on a different znode (check all znodes at the root with "ls /"). [zk: localhost:2181(CONNECTED) 1] ls /hbase-secure
[meta-region-server, backup-masters, table, draining, region-in-transition, table-lock, running, master, namespace, hbaseid, online-snapshot, replication, splitWAL, recovering-regions, rs, flush-table-proc]
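A small sketch of that check; the expected set below is taken from the listing above, and the lookup uses kazoo, which is an assumption on my part (it is not bundled with HBase or Phoenix, just one common Python ZooKeeper client), so that part is commented out:

```python
# Znodes you should see under /hbase-secure on a healthy cluster,
# copied from the zkcli listing above.
expected = {
    "meta-region-server", "backup-masters", "table", "draining",
    "region-in-transition", "table-lock", "running", "master",
    "namespace", "hbaseid", "online-snapshot", "replication",
    "splitWAL", "recovering-regions", "rs", "flush-table-proc",
}

# With the kazoo client (assumption: kazoo is installed):
# from kazoo.client import KazooClient
# zk = KazooClient(hosts="localhost:2181")
# zk.start()
# missing = expected - set(zk.get_children("/hbase-secure"))
# print("missing znodes:", missing or "none")
```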
11-15-2016
07:13 AM
3 Kudos
Can you first check whether your index is active or not? https://community.hortonworks.com/articles/58818/phoenix-index-lifecycle.html
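One way to run that check is to query Phoenix's SYSTEM.CATALOG for the index state; this is a sketch, and the single-letter state codes (e.g. 'a' typically denotes ACTIVE) are explained in the linked article:

```python
# Query for index state; rows with a non-null INDEX_STATE are indexes.
# Run it through sqlline.py or any Phoenix JDBC connection.
query = (
    "SELECT TABLE_NAME, INDEX_STATE "
    "FROM SYSTEM.CATALOG "
    "WHERE INDEX_STATE IS NOT NULL"
)
print(query)
```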