Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1349 | 08-07-2017 08:39 AM |
|  | 2193 | 07-26-2017 06:06 AM |
|  | 6292 | 12-30-2016 08:29 AM |
|  | 4811 | 11-28-2016 08:08 AM |
|  | 3676 | 11-21-2016 02:16 PM |
11-15-2018
06:35 PM
1 Kudo
You may need to check your operating system limits, i.e., the number of threads a user is allowed to run. On the shell, use "ulimit -u" or:
cat /proc/sys/kernel/threads-max
Try increasing the limit:
echo 100000 > /proc/sys/kernel/threads-max
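As a sketch of how to check these limits programmatically (assuming a Linux host; nothing here is specific to this setup, it is just the standard resource API and /proc path):

```python
import resource

# Per-user limit on the number of processes/threads (what "ulimit -u" reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc soft limit:", soft, "hard limit:", hard)

# System-wide maximum number of threads (Linux only).
try:
    with open("/proc/sys/kernel/threads-max") as f:
        print("kernel threads-max:", int(f.read().strip()))
except FileNotFoundError:
    print("/proc/sys/kernel/threads-max not available on this platform")
```

A value of -1 means "unlimited"; anything else is the actual cap the JVM's thread creation will run into.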
10-11-2018
04:14 AM
No mapping is required for PK columns, as all PK columns are concatenated together to form the row key.
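As an illustration of the idea (a simplified sketch only; Phoenix's real encoding with type codecs, sort order and fixed-width fields is more involved): the PK column values are written one after another, with a separator between variable-length parts, and the concatenation is the HBase row key.

```python
# Simplified sketch of how a composite primary key maps to a single row key.
SEP = b"\x00"  # separator between variable-length key parts

def make_row_key(*pk_values):
    """Concatenate PK column values into one row key (everything as UTF-8 here)."""
    return SEP.join(str(v).encode("utf-8") for v in pk_values)

def split_row_key(row_key):
    """Recover the PK columns back from the row key."""
    return [p.decode("utf-8") for p in row_key.split(SEP)]

rk = make_row_key("org1", "user42", 2017)
assert split_row_key(rk) == ["org1", "user42", "2017"]
```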
10-09-2018
09:19 PM
Check if columns "COLUMN_QUALIFIER, COLUMN_NAME" of SYSTEM.CATALOG can help.
09-26-2018
09:01 PM
If you have existing data in HBase, it is possible that it is not serialized the way Phoenix deserializes it for reading, hence the IllegalDataException. It is therefore recommended that you declare all your string fields as VARCHAR and your integer fields as UNSIGNED_INT. If you want to use other data types, it is better to insert and read the data through Phoenix only.
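To illustrate why the serialization matters (a sketch; it assumes the well-known formats: VARCHAR is plain UTF-8 bytes, and UNSIGNED_INT is a 4-byte big-endian value, matching HBase's Bytes.toBytes(int) for non-negative integers, while Phoenix's signed INTEGER flips the sign bit so rows sort correctly):

```python
import struct

# VARCHAR: Phoenix reads the raw bytes as a UTF-8 string, so data written
# directly to HBase as plain bytes stays readable.
raw = "hello".encode("utf-8")
assert raw.decode("utf-8") == "hello"

# UNSIGNED_INT: 4-byte big-endian, same as HBase Bytes.toBytes(int) for n >= 0.
def encode_unsigned_int(n):
    return struct.pack(">I", n)

def decode_unsigned_int(b):
    return struct.unpack(">I", b)[0]

assert decode_unsigned_int(encode_unsigned_int(42)) == 42

# Signed INTEGER in Phoenix stores the value with its sign bit flipped, so
# bytes written as a plain big-endian int decode to the wrong value there:
plain = encode_unsigned_int(42)
as_signed_phoenix = struct.unpack(">I", plain)[0] ^ (1 << 31)  # sign-bit flip
assert as_signed_phoenix != 42
```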
09-20-2018
12:20 AM
Yes, it's a bug. Please raise a JIRA under https://issues.apache.org/jira/browse/PHOENIX
08-15-2018
09:27 PM
1 Kudo
Yes @William Prendergast, it seems that when the max length of the column (VARCHAR(256)) doesn't match the length of the literal (length('test') = 4), the predicate evaluates to false:
select * from t where name like 'test';
As a workaround, you can declare the columns without a max length:
create table t (ID VARCHAR NOT NULL PRIMARY KEY, NAME VARCHAR);
I have raised https://issues.apache.org/jira/browse/PHOENIX-4850 for the actual fix.
08-01-2018
05:45 PM
You are affected by bug PHOENIX-3112.
07-31-2018
09:17 PM
Can you try 'table_conf_unset' instead of 'table_att_unset'? I just updated my answer.
07-31-2018
09:09 PM
You can unset each configuration separately like this:
alter 'my_table', METHOD => 'table_conf_unset', NAME => 'kafka.notification.broker'
alter 'my_table', METHOD => 'table_conf_unset', NAME => 'kafka.notification.health.topic'
alter 'my_table', METHOD => 'table_conf_unset', NAME => 'kafka.some.topic'
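If there are many such keys, a small helper (a sketch; the table and key names are just the ones from this thread) can generate the shell commands for you:

```python
def unset_commands(table, keys):
    """Emit one hbase-shell table_conf_unset command per configuration key."""
    return [
        "alter '%s', METHOD => 'table_conf_unset', NAME => '%s'" % (table, k)
        for k in keys
    ]

cmds = unset_commands("my_table", [
    "kafka.notification.broker",
    "kafka.notification.health.topic",
    "kafka.some.topic",
])
for c in cmds:
    print(c)
```

The printed lines can then be pasted into the HBase shell, or piped to `hbase shell` in one go.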
07-10-2018
05:33 PM
How is your query running without a GROUP BY clause when it has both aggregations and projections? Can you please share the exact query and DDL so this can be debugged better?
06-08-2018
09:43 PM
No, actually I'm unable to find TracingCompat.java in our HDP 2.6.0 version, so I'm curious how you got it. Are you sure your Java application is using the right Phoenix dependency?
06-08-2018
09:24 PM
In Java, you can't keep a statement outside of a method, constructor, or static block; only declarations are allowed there.
06-08-2018
08:57 PM
Configuration config = HBaseConfiguration.create();
Either add them explicitly:
config.set("zookeeper.znode.parent", "/hbase-unsecure");
config.set("hbase.zookeeper.quorum", "<server-name>");
Or include the server's hbase-site.xml in your application classpath so that HBaseConfiguration.create() picks them up automatically.
06-08-2018
08:27 PM
Either the zookeeper quorum or the znode is incorrect.
* The parent znode for HDP is generally /hbase-unsecure or /hbase-secure (not /hbase).
* Check that you are giving the right zookeeper address while connecting.
If you are not specifying them manually, make sure the right hbase-site.xml is in your application classpath.
06-08-2018
06:59 PM
I'm not very familiar with the driver you are using, but recently we incorporated https://github.com/lalinsky/python-phoenixdb. For usage: http://python-phoenixdb.readthedocs.io/en/latest/ Related JIRA: https://issues.apache.org/jira/browse/PHOENIX-4636
06-08-2018
06:54 PM
"UPDATE STATISTICS" will not run at the frequency specified by "phoenix.stats.updateFrequency" as this property is just to ensure that local client cache is refreshed with the stats from the SYSTEM.STATS at this frequency. So, you still need to run "UPDATE STATISTICS" or compaction to have your stats updated. Sorry , I know that documentation on apache for this property is little confusing, I'll update it soon for more clarity.
06-08-2018
06:33 PM
It seems like a concurrency problem: the row with the same primary key is getting updated by two mappers in two different transactions. Changing -Dsqoop.export.statements.per.transaction to 1 can probably help by shortening your transactions and avoiding locks held for a long time.
11-30-2017
11:31 AM
Does your query work fine if you don't use the index, e.g. with the /*+ NO_INDEX */ hint? If it fails only with the index, it may be that your local index got corrupted due to a split or something similar. Have you tried dropping and recreating it, or upgrading the cluster to the latest version?
11-30-2017
11:25 AM
Currently, we don't support an index involving two tables. You can index ZONES.geom (including ZONES.zone in your covered index) and index 'POINT('||TRIPS.PICKUP_LONGITUDE||' '||TRIPS.PICKUP_LATITUDE||')' on YELLOW_TAXI_TRIPS separately to improve your query.
10-31-2017
03:01 PM
1 Kudo
No, you can't specify an encoding scheme or compression for an HBase table from Hive; you need to do it from the HBase shell. There is a limited set of encoding schemes applicable to HFiles (FAST_DIFF, DIFF, PREFIX), and Avro is not among them, but you can store Avro data in an HBase column and define the schema for that data using Hive. https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-AvroDataStoredinHBaseColumns.1
10-12-2017
11:06 AM
Sorry for being late. If you are still seeing this with the local index, can you please describe your table in the HBase shell and paste the output here?
10-03-2017
06:31 AM
It seems your data is monotonically increasing and the keys for the data load belong to a single region, resulting in hotspotting. This is a general problem with any key-value store when the row key is not chosen carefully. If you don't have a row key that is non-monotonic or random in nature, you should look at hashing your key or salting it (prepending a cyclic number, although this is not recommended for point lookups). If you think this is happening only during the initial load, then pre-splitting (https://hbase.apache.org/book.html#_shell_tricks) while creating the table, or splitting the hot region after the first load, is an option.
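As a sketch of the salting idea (this mimics what prepending a hash-derived salt byte does, similar in spirit to Phoenix's SALT_BUCKETS; the bucket count and hash choice here are arbitrary, not anyone's exact implementation):

```python
import hashlib

BUCKETS = 8  # number of salt buckets / pre-split regions (arbitrary for the sketch)

def salted_key(row_key: bytes) -> bytes:
    """Prepend a deterministic salt byte so sequential keys spread across buckets."""
    salt = hashlib.md5(row_key).digest()[0] % BUCKETS
    return bytes([salt]) + row_key

# Monotonically increasing keys no longer all land in one region:
keys = [("row%08d" % i).encode() for i in range(1000)]
buckets = {salted_key(k)[0] for k in keys}
print("buckets hit:", sorted(buckets))
```

The trade-off: point lookups still work (the salt is recomputable from the key), but range scans must now fan out across all buckets.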
09-13-2017
11:30 AM
You can check whether all the rows are actually deleted by reading an HFile and looking for delete markers:
${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475
09-13-2017
08:34 AM
Have you verified the client jar you are using in your application? It should not be a newer version than the server jar.
09-13-2017
08:29 AM
NONE means no compaction is currently running (or it has completed); if the state returns MAJOR, it means a major compaction is still running. Either your table is small, or you have a single file in each region, causing the major compaction to complete quickly and the state to show as NONE.
09-13-2017
08:24 AM
Can you check whether it is the HFiles or recovered.edits that are holding up that space:
hdfs dfs -du -h /hbase/data/default/testTable/0bce2f3457622bf79f75222e9c3107a4/*
Also, make sure that you have KEEP_DELETED_CELLS => 'FALSE':
hbase(main):004:0> describe 'testTable'
Table testTable is ENABLED
testTable
COLUMN FAMILIES DESCRIPTION
{NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE',
BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
If it is not FALSE, you can turn it off:
hbase> alter 't1', NAME => 'f1', KEEP_DELETED_CELLS => false
Also, after deleting the data, execute flush 'testTable' before running the major compaction on the table.
09-13-2017
07:18 AM
CompactionState.NONE means no compaction is currently running.
> And major compaction is an asynchronous process, how to predict the wait time to get the state from the client API
You need to keep polling the API with some arbitrary wait time.
09-13-2017
07:05 AM
It seems the version of the phoenix-client/spark jar you are using for your job is newer than the phoenix-server.jar deployed on ip-192-168-181-203.ca-central-1.compute.internal. Please update the phoenix-server.jar on all region servers to the version you are using on the client, or change the client jar to the version of the deployed phoenix-server.jar. This is as per our backward-compatibility contract: https://phoenix.apache.org/upgrading.html
09-13-2017
06:15 AM
> 1) How to figure out the completion status for the compaction, since the hbase client API majorCompact is an asynchronous process
You can use the API below:
CompactionState compactionState = admin.getCompactionState(table.getName());
> 2) Is it mandatory to wait until the compaction process completes before querying hbase for a real-time process
Check your resource consumption during compaction, as it impacts I/O, CPU usage, and network. With a standard server configuration, it is fine to run real-time processes during compaction.
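The polling itself can be sketched generically (Python here purely for illustration; `check` stands in for a call like admin.getCompactionState(...) == CompactionState.NONE, and the timings are arbitrary assumptions):

```python
import time

def wait_until(check, timeout_s=600.0, poll_interval_s=5.0):
    """Poll check() until it returns True or the timeout expires.

    check is a zero-argument callable standing in for something like:
        admin.getCompactionState(table) == CompactionState.NONE
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(poll_interval_s)
    return False

# Example with a fake compaction that "finishes" after 3 polls:
state = {"polls": 0}
def fake_done():
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_until(fake_done, timeout_s=10.0, poll_interval_s=0.01)
```

In a real job you would pick a poll interval large enough not to hammer the master, and treat a False return (timeout) as a signal to alert rather than to assume completion.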