
Phoenix index becomes unavailable

Explorer

Hello everyone,

One of my Phoenix indexes has become unavailable. It used to work normally.

The other indexes are still fine. Details are as follows.

My Phoenix table name is "SysAction".

The SQL I used to create the index is: create index SysActionLog_idx on "SysActionLog" ("CreateTime", "ModuleCode", "AppCode", "Invoker", "ClientIP")

I run SQL from the Phoenix shell, but I find the queries can't use the Phoenix index. How can I solve this problem?
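(As a quick check, EXPLAIN shows whether a query is served from the index; a minimal sketch with a hypothetical filter value, assuming the unquoted index name was folded to SYSACTIONLOG_IDX:)

EXPLAIN SELECT "Invoker", "ClientIP" FROM "SysActionLog" WHERE "CreateTime" = '2016-11-01';
-- a plan that mentions SYSACTIONLOG_IDX (e.g. a RANGE SCAN over it) means the index is used;
-- a FULL SCAN over "SysActionLog" means it is not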

I don't want to rebuild the index, because rebuilding it might bring down my HBase cluster.

9434-temp.png

1 ACCEPTED SOLUTION

There was a bug in one API (PhoenixRuntime#getTable()) in HDP 2.2 where case-sensitive table names were not handled properly; that is why automatic rebuilding is not happening to bring your disabled index up to date and make it active again. You now have two options: either move to a later HDP version (2.3 or later), or drop the current index, create an ASYNC index on your table (since your table is large), and run IndexTool to build the index data for you with a MapReduce job.

Refer to this page for creating an ASYNC index and running IndexTool: http://phoenix.apache.org/secondary_indexing.html
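(A minimal sketch of the drop-and-recreate option, using the names from this thread and following the pattern on the linked page; the output path is only an example:)

drop index if exists SysActionLog_idx on "SysActionLog";
create index SysActionLog_idx on "SysActionLog" ("CreateTime", "ModuleCode", "AppCode", "Invoker", "ClientIP") ASYNC;

Then populate the index with the MapReduce-based IndexTool from the command line (a case-sensitive data table name may need to be escaped as '"SysActionLog"' so the quotes reach Phoenix):

hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table "SysActionLog" --index-table SYSACTIONLOG_IDX --output-path /tmp/SYSACTIONLOG_IDX_HFILES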


14 REPLIES

Super Guru

What is your primary key, and is the primary key in the WHERE clause? Did you create a local or a global index?

Explorer

My primary key is "RowKey", and it is not in the WHERE clause.

Can you first check whether your index is active or not?

https://community.hortonworks.com/articles/58818/phoenix-index-lifecycle.html
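(One way to check is to query SYSTEM.CATALOG directly; the single-letter INDEX_STATE codes come from Phoenix's PIndexState, for example 'a' = ACTIVE, 'b' = BUILDING, 'i' = INACTIVE, 'x' = DISABLE. A minimal sketch:)

select TABLE_NAME, DATA_TABLE_NAME, INDEX_STATE from SYSTEM.CATALOG where INDEX_STATE is not null;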

Explorer

Thanks.

I found that my table's index state is x. How can I change this index state? Should I rebuild it?

9485-无标题.png


@pan bocun

From the screenshot you have provided, writes to the index failed at timestamp 1478692767291, which is why the index got disabled. Phoenix will rebuild it automatically in the background. If the rebuild keeps taking too long after the failure, you can drop the index and recreate it.

Explorer

I used this command:

ALTER INDEX IF EXISTS SysActionLog_idx ON "SysActionLog" REBUILD;

but it failed with an error.

The error was a timeout, so I increased the Phoenix query timeout.

The index's current state is now:

9523-002.png

I executed the rebuild command, but another error occurred. I can't drop the index, because my cluster is not powerful enough to recreate the index successfully.

This table is 800 GB.

9525-003.png

Explorer

My HDP version is HDP-2.4.2.0-258, and I didn't change any of the default Phoenix configuration parameters.

But I don't see my index being rebuilt automatically; its state is always rebuild (b).

Why? Thanks!

The index rebuild does a full table scan of the data table, so there is a chance of a timeout. Can you try increasing the timeout values of the properties below and retry rebuilding the index? Once you add or change the configurations, you need to export HBASE_CONF_DIR or HBASE_CONF_PATH pointing to the directory that contains the hbase-site.xml.

hbase.client.scanner.timeout.period=1200000
hbase.rpc.timeout=1200000
hbase.regionserver.lease.period=1200000
phoenix.query.timeoutMs=600000
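(A minimal sketch of applying this on the client side; the paths here are assumptions that depend on your installation:)

export HBASE_CONF_DIR=/etc/hbase/conf    # directory that contains the updated hbase-site.xml
/usr/hdp/current/phoenix-client/bin/sqlline.py <zookeeper-quorum>    # rerun ALTER INDEX ... REBUILD from this session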

Explorer

Thank you!

I found an error in the HBase RegionServer log: TERMINALDATA's index state is disable (x).

So HBase now tries to rebuild it, but the rebuild fails.

2016-11-17 06:01:47,352 WARN org.apache.phoenix.coprocessor.MetaDataRegionObserver: ScheduledBuildIndexTask failed!
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=TERMINALDATA
    at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
    at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:316)
    at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:228)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

So I checked the table schema:

9564-011.png

So I want to know whether this is related to the table schema.

When I created the table, I did not specify a schema.

My create table SQL is:

create table if not exists "TerminalData" (
    "RowKey" varchar primary key,
    "ID" varchar,
    "CtrlAddress" varchar,
    "CanSN" varchar,
    "CtrlVersion" varchar,
    "Voltage" varchar,
    "A_Voltage" varchar,
    "B_Voltage" varchar,
    "C_Voltage" varchar,
    "Current" varchar,
    "A_current" varchar,
    "B_current" varchar,
    "C_current" varchar,
    "RatedPower" varchar,
    "ReactivePower" varchar,
    "TotalPowerFactor" varchar,
    "ZeroLineCurrent" varchar,
    "VoltageUR" varchar,
    "CurrentUR" varchar,
    "DirectVoltage" varchar,
    "DirectCurrent" varchar,
    "UpTime" varchar,
    "FaultState" varchar,
    "ActivePower" varchar,
    "ChageBillId" varchar,
    "DataKey" varchar
) default_column_family = 'd'

There seems to be a bug in the automatic rebuild code around case-sensitive table names. Can you tell us which version of HDP or Phoenix you are using?
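(For context on the case-sensitivity point: Phoenix folds unquoted identifiers to upper case, while double-quoted identifiers keep their case, so the rebuild task's lookup of TERMINALDATA cannot find a table that was created as "TerminalData". A minimal illustration:)

select count(*) from "TerminalData";   -- resolves to the mixed-case table created above
select count(*) from TerminalData;     -- folded to TERMINALDATA, which does not exist, matching the TableNotFoundException in the log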

Explorer
  • I use HDInsight; the HDInsight version is:

    9619-090.png

  • I changed some parameters in hbase-site.xml:

    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>9200000</value>
    </property>
    <property>
      <name>hbase.rpc.timeout</name>
      <value>9200000</value>
    </property>
    <property>
      <name>hbase.regionserver.lease.period</name>
      <value>9200000</value>
    </property>
    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>9200000</value>
    </property>

  • But I still run into problems. This table is 700 GB.
  • My cluster has 5 RegionServers (8 cores, 14 GB each).
  • 9620-012.png

Where is the problem?

Make sure that the hbase-site.xml on your sqlline classpath has been updated so that the properties take effect.

