Member since: 02-19-2016
Posts: 158
Kudos Received: 69
Solutions: 24
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 666 | 11-29-2017 08:47 PM
 | 926 | 10-24-2017 06:37 PM
 | 10986 | 08-04-2017 06:58 PM
 | 1009 | 05-15-2017 06:42 PM
 | 1217 | 03-27-2017 06:36 PM
09-13-2017
05:28 PM
You may just ignore this error. More details can be found at https://issues.apache.org/jira/browse/ZEPPELIN-2833
... View more
09-12-2017
05:09 PM
1 Kudo
It really depends on the version of Phoenix and the way you are using it. In 4.7 there is a problem where the MR bulk load handles ROW_TIMESTAMP fields incorrectly (it overrides the user timestamp with the server one). In 4.4, parallel writes to the same row from different clients may cause the index to get out of sync. Another possible reason is outdated statistics: when SYSTEM.STATS has incorrect boundaries for some region, part of the data is not covered by scans. This can be fixed by updating statistics for the table. The first thing I would suggest is to check whether the row count in the physical table is equal to the value that SELECT COUNT(*) produces.
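A minimal sketch of that check, assuming an HDP-style install and a placeholder table name MY_TABLE (adjust the ZooKeeper quorum "localhost" to your cluster):

# Refresh statistics and count rows through Phoenix
echo "UPDATE STATISTICS MY_TABLE;" > /tmp/check.sql
echo "SELECT COUNT(*) FROM MY_TABLE;" >> /tmp/check.sql
phoenix-sqlline localhost < /tmp/check.sql
# Compare with a raw count of the underlying HBase table
echo "count 'MY_TABLE', INTERVAL => 100000" | hbase shell

If the two counts differ, the problem is on the Phoenix side (statistics or index) rather than in the data itself.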
... View more
09-07-2017
05:16 PM
Check that you have a /user/tcb directory on HDFS. Log in as the hdfs user and run the following commands:
hadoop fs -mkdir /user/tcb
hadoop fs -chown tcb /user/tcb
... View more
08-29-2017
06:26 PM
As @ssattiraju mentioned, you may use a file with commands, providing it as a command line parameter. One quick note: if one of the commands fails, the script will stop executing. To avoid that, you may use a simple redirect instead: phoenix-sqlline localhost < file.sql
... View more
08-08-2017
12:52 AM
That's an incorrect approach. You don't need to add the xml files to the jars. As I already mentioned before, you need to add the directories where those files are located, not the files themselves. That's how the Java classpath works: it accepts jars and directories only. So if you need a resource on the Java classpath, you either need to have it in a jar file (like you did) or put its parent directory on the classpath. In SQuirreL this can be done on the Extra Class Path tab of the driver configuration.
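As a rough sketch, assuming an HDP-style layout (the paths below are illustrative; verify them on your cluster), the entries to add there are the two conf directories plus the Phoenix client jar. A quick shell check that they exist:

# These two files must be present; their parent directories, together with
# phoenix-client.jar, are what goes on the SQuirreL Extra Class Path tab.
ls /etc/hbase/conf/hbase-site.xml \
   /etc/hadoop/conf/core-site.xml \
   /usr/hdp/current/phoenix-client/phoenix-client.jar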
... View more
08-04-2017
07:21 PM
Actually that's supposed to be something like 5 minutes by default. So, check whether you have any old snapshots that you don't need anymore.
... View more
08-04-2017
06:58 PM
3 Kudos
Check whether you have the hbase.master.hfilecleaner.ttl configuration property in hbase-site.xml. It defines the TTL for archived files. The archive directory can keep:
1. old WAL files
2. old region files after compaction
3. files for snapshots
I believe that you have some old snapshots, and that's why the archive directory is so big. Delete the snapshots that are no longer required, and those files will be removed automatically (see the sketch below).
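A minimal sketch of the cleanup, with a placeholder snapshot name (run it as the hbase user):

# List existing snapshots, then delete the ones you no longer need
hbase shell <<'EOF'
list_snapshots
delete_snapshot 'old_snapshot_name'
EOF

Once the snapshots are gone, the HFile cleaner removes the corresponding files from the archive directory after the configured TTL expires.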
... View more
- Tags:
- HBase
07-28-2017
07:33 PM
I'm talking about the config directories. Those are /etc/hbase/conf and /etc/hadoop/conf. Some versions of HDP keep a copy of core-site.xml in the hbase conf dir (you may check that manually). The only jar you need to add to the driver configuration is /usr/hdp/current/phoenix-client/phoenix-client.jar. Don't add anything else.
... View more
07-28-2017
07:14 PM
Make sure that both the hbase and hadoop conf dirs are in the classpath. It can be configured through the Extra Class Path tab of the Phoenix driver. Also make sure that you added the directories, not the xml files. This type of timeout usually happens because the HBase client (which is part of Phoenix) requires both hbase-site.xml and core-site.xml from the cluster in order to detect a secured environment. If it doesn't find that both HBase and Hadoop are secured, it tries to use a plain connection to the region servers, which then times out.
... View more
07-21-2017
08:29 PM
You have to create a Spark UDF with similar functionality. There is no way to register a Phoenix UDF in Spark due to API differences.
... View more
07-20-2017
08:18 PM
You need to install phoenix-server.jar on all region servers and the master; it contains the server-side coprocessors such as MetaDataEndpointImpl.
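As a rough sketch of what that installation looks like, with illustrative HDP paths (on an Ambari-managed HDP cluster the jar is normally already linked, so this mostly applies to manual installs):

# Copy the Phoenix server jar into HBase's lib directory on every
# region server and on the master, then restart HBase
cp /usr/hdp/current/phoenix-client/phoenix-server.jar /usr/hdp/current/hbase-regionserver/lib/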
... View more
06-22-2017
07:03 PM
Make sure that you are using the correct version of phoenix-client.jar. For any OS it's highly recommended to have the same hbase-site.xml as the server has and to set HBASE_CONF_DIR pointing to the directory where this file is located.
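A minimal sketch of that setup (the conf path and the ZooKeeper host are placeholders):

# Point the Phoenix client at the same configuration the server uses
export HBASE_CONF_DIR=/etc/hbase/conf
phoenix-sqlline your-zookeeper-host:2181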
... View more
06-21-2017
06:37 PM
Are you using HDP? SYSTEM.MUTEX is not used in Phoenix 4.7, which is the version shipped in the HDP packages. If your question is about a release from the Apache site, please be more specific about the version.
... View more
06-12-2017
07:08 PM
Could you please provide the Spark and Phoenix versions (or just the HDP version) you are using?
... View more
05-15-2017
06:42 PM
1 Kudo
256 is the default value for hbase.hconnection.threads.max. phoenix.client.connection.max.allowed.connections was introduced only in Apache Phoenix 4.10, so it's not relevant to HDP 2.4, nor even to the more recent HDP 2.6.
... View more
05-10-2017
07:18 PM
Try searching HCC for the zeppelin and phoenix keywords; most of the configuration problems and common issues have already been covered here. For example, you may need to update the libraries in the Zeppelin configuration to get it working with Phoenix: Zeppelin with Phoenix on HDP
... View more
03-27-2017
10:31 PM
Once we support a custom select (PHOENIX-1505, PHOENIX-1506) it will be possible to do that (like ... select column_1 as c1, ...). Honestly speaking, I can't say when it will be done.
... View more
03-27-2017
06:36 PM
1 Kudo
No, the view has the same column names.
... View more
03-10-2017
07:12 PM
1 Kudo
PLong.toObject uses the regular Java Long.parseLong method. IllegalDataException is thrown when that parsing throws a NumberFormatException. So I can only suggest that you have some incorrect data. In the case of CSV data it's usually some non-numeric characters in the input string.
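A quick way to hunt for such rows, assuming the numeric value is in the third comma-separated field of data.csv (both the file name and the field number are placeholders):

# Print CSV lines whose third field is not a plain integer
awk -F',' '$3 !~ /^-?[0-9]+$/ {print NR": "$0}' data.csv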
... View more
02-24-2017
06:44 PM
Can you check the master log for the exception? It's usually located at /var/log/hbase or $HBASE_HOME/logs, depending on how you installed HBase. From the log it's usually quite easy to understand why the service was stopped.
... View more
02-24-2017
06:39 PM
1 Kudo
There is no partitioning in Apache Phoenix. It supports salting: https://phoenix.apache.org/salted.html The easiest way to move the data is to create a new table with salting and copy your data using an upsert select statement.
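A minimal sketch of that copy, with placeholder table and column names (SALT_BUCKETS is typically chosen close to the number of region servers), fed to sqlline the same way as a script file:

phoenix-sqlline localhost <<'EOF'
-- New salted table with the same schema as the old one
CREATE TABLE NEW_TABLE (ID BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR) SALT_BUCKETS = 8;
-- Copy the data over
UPSERT INTO NEW_TABLE (ID, COL1) SELECT ID, COL1 FROM OLD_TABLE;
EOF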
... View more
02-23-2017
06:54 PM
That may happen if PQS is already running. Try running ps -aef | grep queryserver to check. You may kill it and start it again using Ambari. If it's not there, check the logs for the exception; you may find it at /var/log/hbase/phoenix-hbase-server.log
... View more
02-15-2017
08:51 PM
Are you using OpenTSDB? There is a known issue: https://github.com/OpenTSDB/asynchbase/issues/153 The only thing I can suggest at the moment is to switch to 'authentication' instead of 'privacy' until the issue is resolved.
... View more
02-15-2017
05:57 PM
2 Kudos
There are two types of upserts in Phoenix. One is upsert values, and it doesn't support a where clause. The other is upsert select. You need the latter, and it should look like:
upsert into table1(id, "column2") select id, 'replacing string' from table1 where "column2" is null and <other conditions>;
Don't forget that you must specify the primary key values in any upsert statement (id in my example).
... View more
02-09-2017
09:20 PM
1 Kudo
For HBase it would be good to run a major compaction on the tables to increase data locality.
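A one-liner sketch of that, with a placeholder table name:

# Trigger a major compaction for a single table from the command line
echo "major_compact 'TABLE_NAME'" | hbase shell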
... View more
02-08-2017
10:05 PM
It may happen if your YARN cluster has limited resources and they are occupied by other applications. Check the YARN UI for the application status and free resources. Use yarn application -kill appID to remove the ones that got stuck for some reason.
... View more
02-08-2017
06:22 PM
1. Please check that /etc/hbase/conf is on the classpath, or set the HBASE_CONF_DIR env variable to this path (I expect that you are running it on a node where the hbase client is deployed; otherwise copy the config from your cluster to the node where the client is running).
2. Before running a custom application, make sure that you are able to access HBase using the regular hbase shell. If that doesn't work, check in the Ambari UI that both the master and the region servers are running.
3. It's not necessary, but it is recommended to build your application with the HDP HBase libraries. For maven, include the HDP repo in your project:
<repository>
  <id>public</id>
  <name>hwx</name>
  <url>http://nexus-private.hortonworks.com/nexus/content/groups/public/</url>
</repository>
and set the corresponding version for the HBase libraries, like 1.1.2.2.5.0.0-1245 (for HDP 2.5).
... View more
01-31-2017
10:33 PM
It's not clear why kinit doesn't work. You may try to get a log from the Kerberos client using the KRB5_TRACE env variable: KRB5_TRACE=/tmp/log kinit -k -t /etc/security/keytabs/hbase.headless.keytab hbase-Sandbox@EXAMPLE.COM Usually that helps to identify the problem. The most common problems: the principal was modified after the keytab was created (so you need to regenerate the keytab), or the keytab was created without the -norandkey option (but usually in that case kinit with a password would not work either). However, I would suggest avoiding service tickets and instead granting hbase_user1 permissions on the tables it needs to access. Use hbase shell to do that: grant 'hbase_user1', 'RWCA', 'table_name'
... View more
01-31-2017
09:34 PM
If you haven't seen any exceptions during the execution, I would suggest checking for duplicated rowkeys. If it's not possible to do that for the source, you may try to do it on the HBase table itself. I would recommend using a ruby function in the HBase shell:
def count_versions(tablename, num, args = {})
  table = @shell.hbase_table(tablename)
  # Run the scanner
  scanner = table._get_scanner(args)
  count = 0
  iter = scanner.iterator
  # Iterate results
  while iter.hasNext
    row = iter.next
    i = row.listCells.count
    if i > num
      count += 1
    end
  end
  # Return the counter
  return count
end
Execute it like this: count_versions 'X', 10, {RAW => true, VERSIONS => 3} where 10 is the expected number of cells for each row, which depends on your data schema. So if a regular row has 10 cells, one that was overwritten will have 20. The function will return the number of such rows.
... View more
01-31-2017
07:00 PM
1 Kudo
Usually you just need to migrate your HBase tables (using the copyTable utility, for example). During the first run of sqlline, all required upgrade steps happen automatically. A manual step of running psql is only required if you hit the conditions described in the Phoenix-4.5.0 Release Notes section at https://phoenix.apache.org/release_notes.html
... View more