Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3473 | 08-06-2019 07:09 PM |
| | 3672 | 07-19-2019 01:57 PM |
| | 5201 | 02-25-2019 04:47 PM |
| | 4668 | 10-11-2018 02:47 PM |
| | 1768 | 09-26-2018 02:49 PM |
12-12-2016
05:21 PM
You likely do not want to distribute your keytab to all nodes, as this is the same principle as lending out the keys to your house and trusting no one will use them maliciously: if it is ever compromised, you now have to change all of your locks. HBase supports delegation token authentication, which lets you acquire a short-lived "password" using your Kerberos credentials before submitting your task. This short-lived password can be passed around with greater confidence because, if compromised, it can be revoked without affecting your 'master' keytab. In the pure MapReduce context, you can do this by invoking TableMapReduceUtil.initCredentials(jobConf). My understanding is that this works similarly for Spark, but I don't have a lot of experience there. Hope it helps.
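For the MapReduce case, a minimal sketch of that call might look like the following, assuming the newer org.apache.hadoop.mapreduce API and that the submitting user has already authenticated with Kerberos; the job name and the commented-out submission are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DelegationTokenJobSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-job-with-delegation-token");

    // While still authenticated via Kerberos on the submitting host, obtain an
    // HBase delegation token and attach it to the job's credentials. The token,
    // not the keytab, is what travels to the worker nodes with the tasks.
    TableMapReduceUtil.initCredentials(job);

    // ... configure mapper/reducer, input/output, etc., then submit:
    // job.waitForCompletion(true);
  }
}
```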
12-12-2016
02:31 AM
2 Kudos
One quirk of Apache Phoenix when compared to a traditional RDBMS is that Phoenix provides no notion of simple username/password authentication. This largely stems from Apache HBase, which Phoenix is built on, also not providing this form of authentication. With the introduction of the Phoenix Query Server, we have a number of new ways to interact with Phoenix. We also have the ability to hook together new systems to provide features, like username/password authentication, which are not traditionally supported.
There are multiple products available which can perform this kind of authentication, but we can trivially show that this works via a common HTTP load balancer, HAProxy. Let's assume that we have the Phoenix Query Server running on our local machine listening on the standard 8765 port. We can enable some trivial authentication using HAProxy. First, we need to create our HAProxy configuration file.
global
    maxconn 256

defaults
    mode http
    option redispatch
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

userlist AvaticaUsers
    user josh insecure-password secret

frontend avatica-http-in
    bind *:9000
    default_backend avaticaservers

backend avaticaservers
    balance source
    server queryserver1 127.0.0.1:8765 check
    acl AuthOkay http_auth(AvaticaUsers)
    http-request auth if !AuthOkay
The above contents can be placed into a file and referenced when starting HAProxy (e.g. `haproxy -f my_auth.conf`). The result is HAProxy listening on port 9000 and applying HTTP Basic authentication to requests before they are dispatched to the backend PQS. This example only accepts the username/password combination of "josh" and "secret". Wiring in an external authentication source is left as an exercise for the reader.
With the changes presently staged in PHOENIX-3517, we can easily connect to PQS, via HAProxy, using our username/password and the HTTP Basic authentication method.
./sqlline-thin.py -a BASIC --auth-user=josh --auth-password=secret http://localhost:9000
Conversely, using a username or password that doesn't match the configuration results in the client receiving an HTTP 403 error and being unable to access Phoenix.
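For programmatic access, the same credentials can be supplied through the Avatica thin JDBC driver. Below is a minimal sketch; the authentication, avatica_user, and avatica_password URL properties come from Avatica's client configuration, and the table name is purely hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientBasicAuthExample {
  public static void main(String[] args) throws Exception {
    // Point the thin driver at HAProxy (port 9000) rather than directly at PQS,
    // and pass the HTTP Basic credentials via Avatica's connection properties.
    String url = "jdbc:phoenix:thin:url=http://localhost:9000;serialization=PROTOBUF;"
        + "authentication=BASIC;avatica_user=josh;avatica_password=secret";

    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         // Hypothetical table; substitute one that exists in your cluster.
         ResultSet rs = stmt.executeQuery("SELECT col1 FROM my_table LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    }
  }
}
```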
This setup can be extrapolated to related technologies like Apache Knox, which provides a fully featured authentication gateway service, and shows how we can bring username/password authentication to Apache Phoenix in the near future.
12-07-2016
08:48 PM
Your terminology is off, but the explanation seems plausible :). 2888 and 3888 are the ports used for ZK internal communication (ZK servers talking to each other: quorum peer traffic and leader election). I can imagine that if ZK servers couldn't communicate with each other, ZK would not operate as expected. SASL is just a way of performing authentication and has nothing to do with the low-level transport over the wire.
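For reference, those two ports appear in the server entries of each node's zoo.cfg; a typical three-node ensemble might be declared like the sketch below (hostnames are hypothetical):

```
# zoo.cfg: host:peer-communication-port:leader-election-port per ensemble member
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```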
12-07-2016
04:23 PM
Check the HBase master log for additional information about the ZooKeeper Kerberos login. You should see messages shortly after the process starts that print the ticket lifetime information. There may be other exceptions in the log about a failure to log in to Kerberos that result in this znode creation failing.
12-06-2016
03:40 PM
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_77]
    at sun.nio.ch.Net.connect(Net.java:454) ~[na:1.8.0_77]
    at sun.nio.ch.Net.connect(Net.java:446) ~[na:1.8.0_77]
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_77]
    at java.nio.channels.SocketChannel.open(SocketChannel.java:189) ~[na:1.8.0_77]
    at org.apache.nifi.distributed.cache.client.StandardCommsSession.<init>(StandardCommsSession.java:49) ~[na:na]
    at org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.createCommsSession(DistributedMapCacheClientService.java:234) ~[na:na]
    at org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService.leaseCommsSession(DistributedMapCacheClientService.java:249)
Actually, it looks like you have intra-NiFi problems. This is not trying to connect to HBase, which you can verify from the stack trace: the connection is being refused inside NiFi's DistributedMapCacheClientService.
12-05-2016
08:26 PM
It's something that is asked now and again, but, no, I am not aware of any concrete plans to integrate them. It would be helpful to identify what is actually being asked for: presently, the only REST API out of Accumulo is the Accumulo monitor, which provides insight into a cluster. There is no client API over REST, which is what Knox would traditionally integrate with. Understanding what kind of integration you're asking for would be helpful. Thanks!
11-29-2016
04:59 PM
Depending on the size of the data you want to export, you can just run a normal query:

SELECT col1 || ',' || col2 || ',' || col3 FROM my_table;
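If you would rather capture that output from code than from sqlline, a small JDBC sketch along these lines works; the JDBC URL, table, and column names here are assumptions, not details from your cluster:

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleCsvExport {
  public static void main(String[] args) throws Exception {
    // Each row comes back as a single, already comma-joined string.
    String query = "SELECT col1 || ',' || col2 || ',' || col3 FROM my_table";

    // Hypothetical ZooKeeper quorum; adjust for your environment.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(query);
         PrintWriter out = new PrintWriter("export.csv")) {
      while (rs.next()) {
        out.println(rs.getString(1));
      }
    }
  }
}
```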
11-28-2016
02:49 PM
Yup! This is it. The client is trying to use JSON but the server is configured to use PROTOBUF. If you want to use JSON (which I do not recommend), set phoenix.queryserver.serialization=JSON in hbase-site.xml and restart the Phoenix Query Server.
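For clarity, that setting would look roughly like this in the Query Server's hbase-site.xml (the property name comes from above; the surrounding layout is just the standard Hadoop configuration format):

```xml
<property>
  <name>phoenix.queryserver.serialization</name>
  <value>JSON</value>
</property>
```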
11-22-2016
11:39 PM
Sorry, yes, it looks like I was mistaken. In this version of HBase, the ProtobufUtil class is included in hbase-client.jar, which is included in the phoenix-client.jar. Have you actually verified that the jar is on the PQS's classpath? The thin client is only displaying the error that the server returned; it is not encountering the error directly. Also, you should really create your own question to address this issue instead of piggy-backing on this one. They are not the same.