Created on 11-15-2016 06:22 AM - edited 08-18-2019 03:18 AM
Hello everyone,
One of my Phoenix indexes has become unavailable. It used to work normally, and my other indexes are still fine. Details are as follows.
My Phoenix table name is "SysActionLog".
My CREATE INDEX SQL is:

    create index SysActionLog_idx on "SysActionLog" ("CreateTime", "ModuleCode", "AppCode", "Invoker", "ClientIP")

When I run queries in the Phoenix shell, I find that the index is not used. How can I solve this problem?
I don't want to rebuild the index, because rebuilding it might bring down my HBase cluster.
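For reference, here is a sketch of how the index state and the query plan can be checked (SYSTEM.CATALOG and EXPLAIN are standard Phoenix; the selected columns and the literal in the WHERE clause below are only examples):

    -- List indexes and the state Phoenix has recorded for them ('x' is the DISABLE state).
    SELECT TABLE_NAME, DATA_TABLE_NAME, INDEX_STATE
    FROM SYSTEM.CATALOG
    WHERE INDEX_STATE IS NOT NULL;

    -- If the index were usable, this plan would scan SYSACTIONLOG_IDX
    -- instead of full-scanning "SysActionLog".
    EXPLAIN SELECT "Invoker", "ClientIP"
    FROM "SysActionLog"
    WHERE "CreateTime" = '2016-11-01';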
Created 11-21-2016 02:16 PM
There was a bug in one API (PhoenixRuntime#getTable()) in HDP 2.2 where case-sensitive table names were not handled properly. That is why the automatic rebuild is not bringing your disabled index up to date and making it active again. You now have two options: either move to a later HDP version (2.3 or later), or drop the current index, create an ASYNC index on your table (since your table is large), and run IndexTool to build the index data for you via a MapReduce job.
Refer to this page for creating an ASYNC index and running IndexTool: http://phoenix.apache.org/secondary_indexing.html
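For the second option, a rough sketch using the names from this thread (the output path, and exactly how the case-sensitive data-table name must be quoted for IndexTool, are assumptions to adapt from the linked page):

    -- Drop the existing index and re-create it as ASYNC so no synchronous
    -- rebuild is triggered on the (large) data table.
    DROP INDEX SysActionLog_idx ON "SysActionLog";
    CREATE INDEX SysActionLog_idx ON "SysActionLog"
        ("CreateTime", "ModuleCode", "AppCode", "Invoker", "ClientIP") ASYNC;

    -- Then populate it from a shell with the MapReduce IndexTool, roughly:
    --   hbase org.apache.phoenix.mapreduce.index.IndexTool \
    --     --data-table "SysActionLog" --index-table SYSACTIONLOG_IDX \
    --     --output-path /tmp/SYSACTIONLOG_IDX_HFILES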
Created on 11-18-2016 02:32 AM - edited 08-18-2019 03:18 AM
Thank you!
I found some errors in the HBase RegionServer log: the index on TERMINALDATA is in the DISABLE (x) state,
so HBase now tries to rebuild it automatically, but the rebuild fails:
    2016-11-17 06:01:47,352 WARN org.apache.phoenix.coprocessor.MetaDataRegionObserver: ScheduledBuildIndexTask failed!
    org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=TERMINALDATA
        at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
        at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:316)
        at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:228)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
So I checked the table schema.
I want to know whether this is related to the table schema: when I created the table, I did not specify a schema name.
My CREATE TABLE SQL is:
    create table if not exists "TerminalData" (
        "RowKey" varchar primary key,
        "ID" varchar,
        "CtrlAddress" varchar,
        "CanSN" varchar,
        "CtrlVersion" varchar,
        "Voltage" varchar,
        "A_Voltage" varchar,
        "B_Voltage" varchar,
        "C_Voltage" varchar,
        "Current" varchar,
        "A_current" varchar,
        "B_current" varchar,
        "C_current" varchar,
        "RatedPower" varchar,
        "ReactivePower" varchar,
        "TotalPowerFactor" varchar,
        "ZeroLineCurrent" varchar,
        "VoltageUR" varchar,
        "CurrentUR" varchar,
        "DirectVoltage" varchar,
        "DirectCurrent" varchar,
        "UpTime" varchar,
        "FaultState" varchar,
        "ActivePower" varchar,
        "ChageBillId" varchar,
        "DataKey" varchar
    ) default_column_family = 'd'
Created 11-18-2016 06:14 AM
There seems to be a bug in the automatic rebuild code's handling of case-sensitive table names. Can you tell us which version of HDP or Phoenix you are using?
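To illustrate what case sensitivity means here (this is standard Phoenix identifier handling, not something specific to your cluster): unquoted names are upper-cased, so a lookup of TERMINALDATA does not match a table created with the quoted name "TerminalData".

    -- Resolved as TERMINALDATA, so it fails with TableNotFoundException
    -- against the table created above:
    SELECT COUNT(*) FROM TerminalData;

    -- The quoted form preserves the case used in CREATE TABLE and works:
    SELECT COUNT(*) FROM "TerminalData";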
Created on 11-19-2016 05:05 AM - edited 08-18-2019 03:18 AM
I changed some parameters in hbase-site.xml:

    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>9200000</value>
    </property>
    <property>
      <name>hbase.rpc.timeout</name>
      <value>9200000</value>
    </property>
    <property>
      <name>hbase.regionserver.lease.period</name>
      <value>9200000</value>
    </property>
    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>9200000</value>
    </property>
Where is the problem?
Created 11-21-2016 02:18 PM
Make sure that the hbase-site.xml you updated is on your sqlline classpath; otherwise the properties will not take effect.
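(On HDP the Phoenix client normally picks up /etc/hbase/conf/hbase-site.xml, but please verify that for your install.) A rough way to confirm the larger timeouts are in effect is to re-run a long scan that previously hit the old timeout, for example something like:

    SELECT COUNT(*) FROM "TerminalData";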