Member since: 04-20-2016
Posts: 9
Kudos Received: 2
Solutions: 0
09-01-2017
10:55 AM
Sorry for the likely basic question, but I am having a little trouble making sense of the documentation. We are currently running an HDP 2.4 cluster hosting an unsecured HBase database. All access to HBase is via Phoenix; we connect to Phoenix from our application using the fat client. So we currently have no authorization, authentication, etc. We are now looking to start adding auth, and I want to make sure we are looking at the correct tools. We would like the ability to pass a user/password along when establishing our Phoenix connections, so we can provide read-only access to the cluster for some users and read/write access for others.
Do we need to use Kerberos? We are not currently running it in our setup and, per our security folks, would like to avoid it if possible. No external customers will access our cluster; all access is via our app servers or by engineers running ad-hoc queries for debugging. It sounds like Ranger would be the tool to handle authorization for HBase (with Kerberos providing the authentication). Does that work for Phoenix as well? Is PQS required, or would it work with the fat client too? Is there something else we should be looking at, maybe a simpler way to create read-only users in our cluster? We are looking at upgrading to the latest stable HDP in the next several months, if that changes anything.
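For reference, if Kerberos does end up being the route taken, the Phoenix thick (fat) client accepts the principal and keytab directly in the JDBC URL, so no separate login step is needed in application code. A minimal sketch of building such a URL; the ZooKeeper quorum, znode, principal, and keytab path below are all placeholders, not values from this cluster:

```java
// Sketch: constructing a Kerberos-enabled Phoenix thick-client JDBC URL.
// Format: jdbc:phoenix:<zk quorum>:<zk port>:<zk root node>:<principal>:<keytab>
// All hosts, the principal, and the keytab path are illustrative placeholders.
public class SecurePhoenixUrl {

    static String buildUrl(String quorum, int port, String znode,
                           String principal, String keytab) {
        return String.format("jdbc:phoenix:%s:%d:%s:%s:%s",
                quorum, port, znode, principal, keytab);
    }

    public static void main(String[] args) {
        String url = buildUrl("zk1,zk2,zk3", 2181, "/hbase-secure",
                "appuser@EXAMPLE.COM", "/etc/security/keytabs/appuser.keytab");
        System.out.println(url);
        // With phoenix-client on the classpath you would then call:
        // Connection conn = DriverManager.getConnection(url);
    }
}
```

Note this is authentication only; per-user read-only vs. read/write permissions would still come from an authorization layer such as Ranger on top of it.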
Labels:
- Apache Phoenix
03-29-2017
07:10 PM
@Namit Maheshwari Thank you for the quick reply. If I am reading the description of that property correctly, it sounds like a server-side setting. Is there an equivalent client-level setting? For this issue we are not looking at limiting memory on the HBase cluster (that is a separate issue), but on our client applications.
03-29-2017
06:33 PM
We are running HDP 2.4. Our Java application connects to Phoenix using the fat client. While testing our queries in a standalone query tool (DbVisualizer), some queries error out when we allocate 4GB to the client and then succeed when we raise that to 6GB. We know there may be things we can do to reduce the memory usage of those queries, and we are working on that, but when we embed these Phoenix queries in our regular Java app, we need to make sure a bad Phoenix query does not use up all of our heap and cause problems for the rest of the application. Aside from query timeouts, are there any settings we can enable to limit the amount of memory the client will use?
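One avenue worth exploring: Phoenix exposes memory-related tuning properties such as `phoenix.query.maxGlobalMemoryPercentage` (the share of a JVM's heap Phoenix may use for query processing) and `phoenix.query.spoolThresholdBytes` (the buffer size beyond which intermediate results spool to disk). When set on the client side (e.g. in an `hbase-site.xml` on the client classpath, or passed as connection properties), they apply to the fat client's JVM. A sketch, with illustrative values rather than recommendations:

```java
import java.util.Properties;

// Sketch: passing Phoenix client-side memory settings as connection properties.
// The property names come from the Phoenix tuning docs; the values here are
// illustrative only, not recommendations for any particular workload.
public class PhoenixClientMemory {

    static Properties clientMemoryProps() {
        Properties props = new Properties();
        // Cap the share of this JVM's heap Phoenix may use for query processing.
        props.setProperty("phoenix.query.maxGlobalMemoryPercentage", "20");
        // Spool intermediate results to disk once a buffer exceeds this size.
        props.setProperty("phoenix.query.spoolThresholdBytes",
                String.valueOf(20 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        Properties props = clientMemoryProps();
        props.forEach((k, v) -> System.out.println(k + "=" + v));
        // With the Phoenix driver on the classpath:
        // Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1:2181", props);
    }
}
```

This bounds Phoenix's own buffers; it does not replace an overall `-Xmx` cap on the application JVM.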
Labels:
- Apache Phoenix
05-26-2016
11:57 PM
Thanks, I will test that out in the next day or two - I think this is what I was looking for (confirmation that all the metadata I need will be in the data directory).
05-26-2016
09:26 PM
In our case, it is acceptable for HBase to be stopped before the "backup", so we can be sure to be consistent.
05-26-2016
09:20 PM
1 Kudo
We are looking for the best way to back up and restore our HBase (Phoenix) databases in some of our development environments. These environments run a standalone install of HBase, so there is no HDFS; writes go to the local filesystem. I have looked through https://community.hortonworks.com/questions/6584/hbase-table-dump-to-flat-files.html and other documents, and most comments refer to HDFS commands, which obviously don't apply in this case. Can we just zip up the data directory? What about the metadata? We want to be able to "export" the database and restore it to another environment (or over the existing one) at some point in the future. Having a portable artifact like an RDBMS backup would be ideal.
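As far as I understand, in standalone mode everything HBase needs (table data, the `hbase:meta` catalog, WALs) lives under `hbase.rootdir` on the local filesystem, so archiving that whole directory while HBase is stopped should capture a consistent, restorable copy. A self-contained sketch of producing such a portable archive; the directory names are placeholders, and the `main` method just demonstrates the helper on a throwaway temp directory:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch: archiving a standalone HBase data directory (hbase.rootdir on the
// local filesystem) into one portable zip. Assumes HBase is stopped first so
// the files on disk are consistent. Paths below are placeholders.
public class HBaseDirBackup {

    static void zipDirectory(Path sourceDir, Path zipFile) throws IOException {
        try (OutputStream os = Files.newOutputStream(zipFile);
             ZipOutputStream zos = new ZipOutputStream(os);
             Stream<Path> files = Files.walk(sourceDir)) {
            files.filter(Files::isRegularFile).forEach(file -> {
                try {
                    // Store paths relative to the data dir so the archive
                    // can be unpacked into a different rootdir on restore.
                    zos.putNextEntry(new ZipEntry(sourceDir.relativize(file).toString()));
                    Files.copy(file, zos);
                    zos.closeEntry();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo on a temp dir; in practice sourceDir would be hbase.rootdir.
        Path src = Files.createTempDirectory("hbase-data");
        Files.write(src.resolve("example-hfile"), "demo".getBytes());
        Path zip = Files.createTempFile("hbase-backup", ".zip");
        zipDirectory(src, zip);
        System.out.println("wrote " + Files.size(zip) + " bytes to " + zip);
    }
}
```

Restoring would be the reverse: stop HBase, unpack the archive over the target rootdir, and start HBase again.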
Labels:
- Apache HBase
04-21-2016
02:26 PM
Thank you both for your replies. I too have been unable to find a definitive statement about the availability of a table/region during major compaction. I understand there will be an impact on IO/CPU, and I plan on scheduling major compactions on weekends (or other periods of lower activity), but for a 24/7 application I need to understand whether the application will be unavailable/blocked during the minutes(?) of compaction.
04-20-2016
10:08 PM
1 Kudo
Probably a real simple question, but I can't seem to find the answer. What, if anything, is the impact on the availability of a region during HBase maintenance tasks like major/minor compactions, region splits/merges, etc.? For example, can we read/write to a region while it is undergoing a compaction, or will that be blocked until the operation has completed? Thank you
Labels:
- Apache Ambari
- Apache HBase