Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2681 | 08-07-2017 08:39 AM |
|  | 4328 | 07-26-2017 06:06 AM |
|  | 9926 | 12-30-2016 08:29 AM |
|  | 7824 | 11-28-2016 08:08 AM |
|  | 7686 | 11-21-2016 02:16 PM |
02-23-2017
05:25 PM
It seems your command is incorrect; you need to put ":" between the port and the znode parent:

hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dfs.permissions.umask-mode=000 --table TEST.FLIGHT_SEGMENT --input /tmp/test/Segments.tsv -z devone1.lab.com:2181:/hbase-unsecure
12-30-2016
08:32 AM
These properties (phoenix.schema.isNamespaceMappingEnabled and phoenix.schema.mapSystemTablesToNamespace) should not be rolled back. https://phoenix.apache.org/namespace_mapping.html
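As a quick sanity check (a sketch only; the config path below is the usual HDP location and may differ on your cluster), you can confirm the two properties are still set:

# Matches both namespace mapping properties; values must be the same on the
# HBase servers and on every Phoenix client
grep -A2 "phoenix.schema" /etc/hbase/conf/hbase-site.xml

Once namespace mapping has been enabled and the SYSTEM tables migrated, rolling these settings back is not supported.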
12-30-2016
08:29 AM
1 Kudo
I don't know the best way to include hbase-site.xml in the SQuirreL classpath, but people have tried putting hbase-site.xml inside phoenix-client.jar and it seems to work for them. https://distcp.quora.com/Connect-and-query-Apache-Phoenix-with-Squirrel-from-Windows https://mail-archives.apache.org/mod_mbox/phoenix-user/201409.mbox/%3CCAF1+Vs8TMeSeUUWS-b7FYkqNgxdrLWVVB0uHQW5fVS0XQPUp+Q@mail.gmail.com%3E
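A rough sketch of that workaround (the paths are assumptions, adjust them to your install):

# Copy the cluster's hbase-site.xml next to the Phoenix client jar used by SQuirreL
cp /etc/hbase/conf/hbase-site.xml .
cp /usr/hdp/current/phoenix-client/phoenix-client.jar .
# Add hbase-site.xml at the root of the jar so it is picked up from the classpath
jar uf phoenix-client.jar hbase-site.xml

Then point SQuirreL's driver definition at this modified jar.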
12-30-2016
06:01 AM
Thanks @Gabriela Martinez for sharing. Would you mind creating a separate question tagged with Phoenix and NiFi, then answering and accepting it? That will benefit other users who are using NiFi with Phoenix.
12-18-2016
08:15 AM
If you know that all rows of the table have the same number of columns, then you can just fetch the first row (with a scan and a limit) and parse the column names for each column family, as in the sketch below; otherwise @Sergey Soldatov's answer is the only way.
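A minimal sketch of that approach from the HBase shell ('my_table' is just a placeholder):

echo "scan 'my_table', {LIMIT => 1}" | hbase shell
# Each cell of the single row prints as column=<family>:<qualifier>, so the column
# names per column family can be read straight off the output.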
12-14-2016
05:16 AM
What benefit are you expecting from archiving HBase data like a "Hadoop Archive"? Or is your purpose just to archive the HBase data in any form?
12-01-2016
07:12 AM
It's great that it works for you. Can you accept the answer now, so that it will be helpful for other users?
11-30-2016
11:05 AM
1 Kudo
It is in HDFS, and if you are using HDP then it may be under /apps/hbase/data/data/<namespace>/<tableName>/.tabledesc/
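For example (a sketch with placeholder namespace and table names; /apps/hbase/data is the default HDP root dir):

hdfs dfs -ls /apps/hbase/data/data/default/usertable/.tabledesc/
# Normally contains a .tableinfo.<seqid> file holding the serialized table descriptor.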
11-30-2016
10:28 AM
2 Kudos
Table metadata is stored as a table descriptor in the corresponding table directory, and it is read and altered there itself. I don't think we have any znode that keeps the column family information during an alter or create table.
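A quick way to see what the descriptor holds (a sketch; 'my_table' is a placeholder):

echo "describe 'my_table'" | hbase shell
# The output is the table descriptor, including every column family and its attributes;
# that information lives with the table's directory, not in a ZooKeeper znode.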
11-29-2016
09:42 AM
I don't see much of a problem with the code. Can you try adding a debug point in TotalOrderPartitioner.setConf() (around line 88) and see why the split points read from the partition file are different?
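You can also dump the partition file itself and compare the split points (a sketch; _partition.lst is the default file name, use whatever path was passed to TotalOrderPartitioner.setPartitionFile()):

hadoop fs -text _partition.lst
# The partition file is a SequenceFile of split keys, so -text prints each split point
# in order; compare these with the splits the job actually uses at runtime.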