Member since: 09-29-2015
Posts: 94
Kudos Received: 117
Solutions: 35
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2483 | 08-04-2017 06:13 PM |
| | 5491 | 03-21-2017 08:00 PM |
| | 2529 | 11-30-2016 07:38 PM |
| | 1292 | 11-03-2016 05:56 PM |
| | 2708 | 10-11-2016 06:44 PM |
01-26-2016
03:56 AM
3 Kudos
You are getting an access control exception because, by default, the user is not allowed to create a table. You should grant either global-level or namespace-level privileges to the desired user so that that user can create a table. Out of the box, only the hbase user has permission to grant permissions to others, so you have to log in as the hbase user. You can check https://hbase.apache.org/book.html#appendix_acl_matrix and the security section in the book.
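For example, a minimal sketch from the HBase shell, logged in as the hbase user (the user and namespace names here are placeholders):

```
# grant global CREATE permission to a user
grant 'someuser', 'C'

# or scope the CREATE permission to a single namespace
grant 'someuser', 'C', '@somenamespace'
```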
01-26-2016
01:02 AM
2 Kudos
It is not possible to use more than one WAL codec at a time, since it is the WAL codec that decides the binary layout of the cells in the WAL files. Phoenix uses the WAL codec for secondary indexing, so it should be possible to run Phoenix without it if secondary indexing in Phoenix is not used.
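For reference, the codec is a single region server setting in hbase-site.xml, which is why only one can be active at a time; this is the value the Phoenix secondary indexing documentation asks for:

```xml
<!-- WAL codec required by Phoenix secondary indexing; a single-valued setting -->
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
```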
01-21-2016
03:15 AM
It seems we have made an explicit decision that getting the table descriptor should only be allowed with A or C permission, while getting the name of the table is allowed for any of the R, W, A, C, or E privileges. The discussion happened here: https://issues.apache.org/jira/browse/HBASE-12564?focusedCommentId=14234504&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14234504 However, in the shell, the "list" command still uses the version that requires A or C. I've opened a JIRA to fix this: https://issues.apache.org/jira/browse/HBASE-15147. Feel free to comment there if you want.
01-20-2016
06:27 PM
4 Kudos
This is a great question. The list command for getting the list of tables, or getting the "description" of the tables, requires ADMIN or CREATE privileges as of now. The full list of tables is filtered to return only the subset of tables for which the user has A or C. There is, however, an alternative master RPC command to get a list of tables: it returns the table names, but not the descriptors, if you only have READ or WRITE permissions. I think we need to fix this in HBase itself: logically, if you have READ or WRITE access to a table, you should be able to get its table descriptor as well.
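For reference, a minimal Java sketch of the two client-side calls involved, assuming the HBase 1.x Admin API (listTables() goes through the descriptor path that needs A or C, while listTableNames() only needs R or W on the table):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListTablesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Returns full descriptors; the ACL check requires ADMIN or CREATE
      HTableDescriptor[] descriptors = admin.listTables();

      // Returns only the names; works with READ or WRITE on the table
      TableName[] names = admin.listTableNames();

      for (TableName name : names) {
        System.out.println(name.getNameAsString());
      }
    }
  }
}
```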
01-14-2016
07:14 PM
1 Kudo
It depends on the bucket cache size settings for HBase. If you have configured the bucket cache to be bigger than 4 GB (which is all off-heap), you will see this once the cache grows.
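A sketch of the relevant hbase-site.xml settings (the 5 GB capacity is illustrative):

```xml
<!-- keep the bucket cache off-heap -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>

<!-- bucket cache capacity in MB; 5120 is an illustrative 5 GB -->
<property>
  <name>hbase.bucketcache.size</name>
  <value>5120</value>
</property>
```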
01-14-2016
12:19 AM
1 Kudo
It seems related to the off-heap settings. You should look at the -XX:MaxDirectMemorySize parameter in hbase-env.sh and make sure that it is large enough to hold the off-heap block cache (if configured) plus some headroom for the HDFS client's off-heap buffers (usually small).
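For example, a minimal hbase-env.sh sketch (the 6g value is illustrative: a 5 GB off-heap block cache plus roughly 1 GB of headroom):

```bash
# allow up to 6 GB of direct (off-heap) memory for the region server JVM
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=6g"
```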
01-04-2016
08:51 AM
1 Kudo
20K entries per second is pretty achievable. The TTL feature does not insert DELETE markers; cells that have expired according to the TTL are simply filtered out of the returned result set. Eventually a compaction will run, the expired entries will not be seen by the compaction scanner, and thus they will not be written to the new files produced by the compaction. There is also another mechanism: before compaction, if we can be sure that all the cells in an HFile have expired (by looking at the min and max timestamps of the HFile), the whole file is safely deleted without even going through compaction. A TTL of 30 minutes is very short, so you can also look into strategies for not running ANY compactions at all (depending on the write rate, there may not be many HFiles even with compactions disabled). The recently introduced FIFO compaction policy (https://issues.apache.org/jira/browse/HBASE-14468), coming in the next version of HDP-2.3, seems like a great fit; see the sketch below.
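A minimal HBase shell sketch of such a table (the table and family names are placeholders, and the FIFO policy class requires the HBASE-14468 patch):

```
create 'events', {NAME => 'f', TTL => 1800, CONFIGURATION => {'hbase.hstore.defaultengine.compactionpolicy.class' => 'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy'}}
```

TTL is given in seconds, so 1800 corresponds to the 30-minute retention discussed above.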
12-16-2015
12:17 AM
1 Kudo
You have to explicitly list the column names in the CREATE TABLE statement, or you can use dynamic columns at query time to specify the list of columns for that particular query.
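For illustration, a small Phoenix sketch (the table and column names are hypothetical); a dynamic column is declared inline in the statement rather than in the CREATE TABLE:

```sql
-- static columns must be listed explicitly in CREATE TABLE
CREATE TABLE event_log (id BIGINT PRIMARY KEY, created DATE);

-- a dynamic column ('host') is declared per statement instead
UPSERT INTO event_log (id, created, host VARCHAR) VALUES (1, CURRENT_DATE(), 'web01');
SELECT id, host FROM event_log (host VARCHAR);
```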
12-15-2015
10:48 PM
3 Kudos
You can use sqlline like this: sqlline.py localhost:2181:/hbase-unsecure
12-15-2015
08:19 PM
3 Kudos
@Anurag Ramayanapu There are multiple ways to get data out of HBase for backup or other purposes:

1. Export / Import: The Export tool exports the data via an MR job to sequence files living in any Hadoop-compatible file system. Later, the Import tool can be used to import the data back into HBase (see the sketch after this list). More information: https://hbase.apache.org/book.html#_export

2. Snapshot + ExportSnapshot, ImportSnapshot: Taking a snapshot of an HBase table is a lightweight operation that saves references to the actual data files included in the snapshot. After taking a snapshot, you can export the snapshot files to any Hadoop-compatible file system. Exported snapshot files are in HBase-native file formats, and snapshots operate at the table level. More information: https://hbase.apache.org/book.html#ops.snapshots.e...

3. CopyTable: CopyTable does a live scan of the table data and inserts it into another HBase cluster. Note that CopyTable needs a live HBase cluster at the sink; this is useful for multi-DC / DR setups. More information: https://hbase.apache.org/book.html#_copytable

4. Custom MR job + BulkLoad: A custom MR job can be written to export the data out of a live cluster, using whatever encoding and output format is desired (for example TSV, or HFiles). If HFiles are generated, the data can later be bulk loaded into a live HBase cluster with the LoadIncrementalHFiles tool; if CSV or TSV, the ImportTsv tool can be used. More information: https://hbase.apache.org/book.html#mapreduce and https://hbase.apache.org/book.html#_completebulklo...

5. Backup tool: A native backup tool is in the works that defines a backup command, file formats, and utilities to manage multi-table, full and incremental backups. We are (hopefully) very close to committing the backup patches and making them available in the HDP-2.3 series soon. More information: https://issues.apache.org/jira/browse/HBASE-7912 and https://issues.apache.org/jira/browse/HBASE-14030

Except for the backup tool, these solutions all work at the table level, since in most cases different strategies are needed for different tables. However, it should be relatively easy to write a simple script / tool that gets the list of all tables from the master and invokes the corresponding tool for each table.
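For example, a minimal Export / Import sketch from the command line (the table name and HDFS path are placeholders):

```bash
# export table 'mytable' to sequence files on HDFS (runs an MR job)
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /backups/mytable

# later, import the data back into an existing table
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backups/mytable
```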