I have a table in HBase, created through Phoenix, with almost 1 billion records. When I run the query "select count(*) from table" via sqlline or SQuirreL, Phoenix returns a timeout error: "org.apache.phoenix.exception.PhoenixIOException: Failed to get result within timeout, timeout=60000ms". I set my "HBASE_CONF_DIR" and "HBASE_CONF_PATH" environment variables to use "/etc/hbase/conf",
and have also increased the "RPC timeout" and the "phoenix timeout", but it always returns timeout=60000ms. Is there any configuration setting that requires modification?
I have included all the configs mentioned in the above article, but the query still fails. I loaded the table into Spark and then ran the count there, and it returns the result, but the same query fails in sqlline and SQuirreL.
I also faced the same issue, but changing the timeout config worked for me (though it took some time for these changes to come into effect).
I changed the Phoenix Query Timeout config (via Ambari), tried the query, and got the same TimeoutException with the default 60 secs.
I verified the updated value in the hbase-site.xml config file and tried the same query again after some time (around 1 hr, though it may have taken effect earlier), and the query executed successfully, taking around 90 secs. So the timeout change was eventually picked up.
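For reference, the client-side setting involved here is phoenix.query.timeoutMs in the client's hbase-site.xml. A minimal sketch of what the entry looks like (the 600000 ms value is just an example I chose, not a recommendation):

```xml
<!-- client-side hbase-site.xml: Phoenix query timeout in milliseconds -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```

As noted above, the client may need to be restarted (or some time may need to pass) before the new value takes effect.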
I have the same issue. If I do 'explain select count(*) from table', it returns
CLIENT 3205-CHUNK 2362713879 ROWS 993106407961 BYTES
I know I have fewer than 2362713879 rows in this table, so the estimate in the plan is off.
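The ROWS figure in an EXPLAIN plan is Phoenix's estimate derived from its collected statistics (guideposts), so if it is far above the real row count, the stats may be stale. A hedged sketch of refreshing them in sqlline, assuming a table named MY_TABLE (a placeholder):

```sql
-- Recollect Phoenix statistics (guideposts) for the table and its indexes;
-- MY_TABLE is a placeholder for your actual table name
UPDATE STATISTICS MY_TABLE ALL;
```

After this completes, rerunning the EXPLAIN should show an estimate closer to the actual row count.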
Based on the configs, here are the settings that helped me. I am using DBeaver to connect to Phoenix, and queries were timing out as mentioned.
In the hbase-site.xml file I made the following changes:
hbase.rpc.timeout = 1200000
hbase.regionserver.lease.period = 1200000
hbase.client.scanner.caching = 1000
hbase.client.scanner.timeout.period = 1200000
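Written out as hbase-site.xml property entries, the list above looks like this (the timeout values are the 20-minute settings from my setup, not universal recommendations):

```xml
<!-- client-side hbase-site.xml; timeout values in milliseconds (1200000 ms = 20 min) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>1000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>1200000</value>
</property>
```

Reconnect the client (DBeaver, sqlline, etc.) after changing these so the new values are picked up.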
I believe only phoenix.query.timeoutMs should be needed for the problem posed by the original poster.
Please try and see if it helps you.