Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3478 | 08-06-2019 07:09 PM |
| | 3678 | 07-19-2019 01:57 PM |
| | 5208 | 02-25-2019 04:47 PM |
| | 4674 | 10-11-2018 02:47 PM |
| | 1772 | 09-26-2018 02:49 PM |
03-27-2017 04:03 AM
Might you take a moment to "Accept" the answer you found helpful, @Derrick Lin? Thanks.
03-27-2017 01:14 AM
You want to set hbase.master.ipc.address instead. The bindAddress property you changed configures the HTTP web UI server, not the RPC server; the ipc address is the property you want to set to 0.0.0.0. For future reference, see https://community.hortonworks.com/articles/24277/parameters-for-multi-homing.html
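As a sketch, the hbase-site.xml entry would look roughly like the following (the regionserver has an analogous hbase.regionserver.ipc.address property; verify the exact names against the multi-homing article linked above):

```xml
<!-- hbase-site.xml: bind the HMaster RPC server to all interfaces -->
<property>
  <name>hbase.master.ipc.address</name>
  <value>0.0.0.0</value>
</property>
```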
03-23-2017 04:52 PM
It is within your ability to change the JARs on your system, but this is not something we recommend. You will run into issues, and you will be on your own to fix them.
03-22-2017 09:51 PM
Re-read that exception message: "Login failure for phoenix from keytab zk-host-1,zk-host-2,zk-host-3". JAAS is trying to perform a login using your list of ZooKeeper hosts as the path to a keytab. Pretty sure that isn't right 🙂 If we look at your argument to sqlline.py, we can see the error: jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:/hbase-secure. Compare with the usage statement for sqlline:

```
Usage: sqlline.py [zookeeper] [optional_sql_file]
Example:
1. sqlline.py
2. sqlline.py localhost:2181:/hbase
3. sqlline.py localhost:2181:/hbase ../examples/stock_symbol.sql
4. sqlline.py ../examples/stock_symbol.sql
```

In short: the argument to sqlline is not the full JDBC URL, just the list of ZooKeeper hosts and the root znode: zk-host-1,zk-host-2,zk-host-3:/hbase-secure
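To make the relationship concrete, here is a small sketch (using the hypothetical zk-host names from your question): the sqlline.py argument is simply the JDBC URL with the "jdbc:phoenix:" prefix stripped off.

```shell
# The full Phoenix JDBC URL (what a Java client would use)
jdbc_url="jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:/hbase-secure"

# sqlline.py wants only the part after the "jdbc:phoenix:" prefix
sqlline_arg="${jdbc_url#jdbc:phoenix:}"
echo "$sqlline_arg"   # zk-host-1,zk-host-2,zk-host-3:/hbase-secure
```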
03-22-2017 04:10 PM
4 Kudos
"I was under the impression that HBase snapshots stored only metadata without replicating any data"

Your impression is incorrect. Snapshot *creation* is a fast operation because it does not copy any data: a snapshot is just a reference to the list of files in HDFS that HBase is using at that moment. As you continue to write data into HBase, compactions read old files and create new ones. Normally these old files are deleted, but your snapshot still refers to them, so they are retained. You can't get backups for free; eventually you have to pay the cost of storing that data.

Please make sure to (re)read the section on snapshots in the HBase book https://hbase.apache.org/book.html#ops.snapshots -- it is very thorough and covers this topic in much more detail than I have here. Long-term, you can consider the ongoing incremental backup-and-restore work as an alternative to snapshots: https://hortonworks.com/blog/coming-hdp-2-5-incremental-backup-restore-apache-hbase-apache-phoenix/
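A toy sketch of why a snapshot is not free storage-wise (local temp directories standing in for HBase's data and archive directories; this is an illustration, not the real HBase mechanism):

```shell
# Hypothetical stand-ins for HBase's data and archive directories
demo=$(mktemp -d)
mkdir -p "$demo/data" "$demo/archive"
echo "old-hfile" > "$demo/data/hfile1"

# "Taking a snapshot": record references to the current files (no data copied)
ls "$demo/data" > "$demo/snapshot-manifest"

# A compaction rewrites the data; the old file is still referenced by the
# snapshot, so instead of being deleted it is kept in the archive
mv "$demo/data/hfile1" "$demo/archive/hfile1"
echo "new-hfile" > "$demo/data/hfile2"

# Total storage now holds both the new file and the archived old one
count=$(find "$demo/data" "$demo/archive" -type f | grep -c hfile)
echo "$count"   # 2
```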
03-21-2017 04:39 PM
Sounds like a bug to me. Good debugging. Please reach out to support for assistance on getting this one fixed.
03-20-2017 07:23 PM
Swap ":id" with ":key" in the hbase.columns.mapping. Just a simple typo. See https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration for documentation on configuring the HBaseStorageHandler.
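A hypothetical sketch of the corrected DDL (table and column names invented for illustration; the key point is that the first entry in hbase.columns.mapping must be ":key", which maps to the HBase row key):

```sql
-- ":key" (not ":id") binds the first Hive column to the HBase row key;
-- remaining entries map to column-family:qualifier pairs.
CREATE EXTERNAL TABLE hbase_users(id STRING, name STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:name")
TBLPROPERTIES ("hbase.table.name" = "users");
```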
03-20-2017 03:49 PM
Please include the full exception. I would guess that your classpath is wrong, causing this to not find your HBase instance.
03-20-2017 03:04 PM
1 Kudo
Note the tab ("\t") character provided in step #5. Change it to a comma (",") and you can read CSV files. The column delimiter you provide must match the data you want to ingest.
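The delimiter-matching point can be seen with a toy example (plain awk here, not the ingest tool itself): the field separator you hand the parser must match the file's actual delimiter, or the whole line lands in one field.

```shell
# The same record, once tab-delimited and once comma-delimited
printf 'row1\tvalue1\n' | awk -F'\t' '{print $2}'   # prints value1
printf 'row1,value1\n'  | awk -F','  '{print $2}'   # prints value1

# A mismatched delimiter leaves the whole line in field 1:
printf 'row1,value1\n'  | awk -F'\t' '{print $2}'   # prints an empty line
```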
03-19-2017 04:26 PM
1 Kudo
This is yet another form of a common question on this forum: you should only use Phoenix to read tables that you created through Phoenix. Phoenix uses its own, very specific type serialization. The error you are facing is telling you that the data in the cell does not match the serialization Phoenix expects, because a table created via the Hive HBaseStorageHandler is not natively compatible with Phoenix. This is not a Phoenix bug. To make data accessible from both Hive and Phoenix, use the PhoenixStorageHandler https://phoenix.apache.org/hive_storage_handler.html which is available in HDP 2.5 and later.
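A hypothetical sketch of what that looks like in Hive DDL (table, column, and quorum values invented for illustration; check the linked PhoenixStorageHandler page for the exact property names your version supports):

```sql
-- Hive external table backed by a Phoenix table, so both engines
-- read and write through Phoenix's serialization.
CREATE EXTERNAL TABLE phoenix_users (id STRING, name STRING)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "USERS",
  "phoenix.zookeeper.quorum" = "zk-host-1,zk-host-2,zk-host-3",
  "phoenix.rowkeys" = "id"
);
```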