12-23-2015
04:29 AM
You should use the same version of the dependency that you are running your code against: 4.2.0.2.2.4.2-2. For compile time, use phoenix-core, which will transitively include the necessary artifacts. The specific version of the artifact is available at http://repo.hortonworks.com/content/groups/public/org/apache/phoenix/phoenix-core/4.2.0.2.2.4.2-2/. You can configure this repository in your ~/.m2/settings.xml; follow this Maven guide for more information: https://maven.apache.org/guides/mini/guide-multiple-repositories.html
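As a minimal sketch, the repository entry in ~/.m2/settings.xml could look like this (the profile and repository ids are arbitrary names chosen for illustration):

```xml
<settings>
  <profiles>
    <profile>
      <id>hortonworks</id>
      <repositories>
        <repository>
          <id>hortonworks-public</id>
          <url>http://repo.hortonworks.com/content/groups/public/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <!-- Activate the profile so the repository is always consulted. -->
  <activeProfiles>
    <activeProfile>hortonworks</activeProfile>
  </activeProfiles>
</settings>
```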
12-23-2015
04:05 AM
Can you share the entire exception (with stack trace), please? I ran the parsing code from HDP 2.2.4.2 by hand with the URL you provided and it correctly parsed out the tokens: [d001.unix.gsm1900.org,d002.unix.gsm1900.org,d003.unix.gsm1900.org, 2181, /hbase-secure, srvc@DEFAULT_DEV.com, /home/myname/krb/hdpsrvc.keytab]
Make sure to double check that the URL you provided here is also the same as the one in your code.
12-23-2015
12:03 AM
3 Kudos
The JDBC URL for Phoenix with Kerberos is of the form:

jdbc:phoenix:<Zookeeper_host_name>:<port_number>:<secured_Zookeeper_node>:<principal_name>:<HBase_headless_keytab_file>

You should only need the phoenix-client.jar on the classpath of your Java application for dependencies. You will likely also need the Hadoop and HBase configuration file directories (/etc/hadoop/conf and /etc/hbase/conf) on your classpath for Kerberos authentication to work properly.
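To make the URL shape concrete, here is a small sketch that assembles it from its parts. All host, principal, and keytab values below are hypothetical placeholders, not taken from the thread:

```java
// Sketch: build the Kerberos-enabled Phoenix JDBC URL from its components.
// The components are joined with ':' in the order the driver expects.
public class PhoenixUrl {
    static String buildUrl(String zkHosts, int port, String zkNode,
                           String principal, String keytab) {
        return String.join(":", "jdbc:phoenix", zkHosts,
                Integer.toString(port), zkNode, principal, keytab);
    }

    public static void main(String[] args) {
        // Hypothetical placeholder values.
        String url = buildUrl("zk1.example.com", 2181, "/hbase-secure",
                "svc@EXAMPLE.COM", "/etc/security/keytabs/svc.keytab");
        System.out.println(url);
        // -> jdbc:phoenix:zk1.example.com:2181:/hbase-secure:svc@EXAMPLE.COM:/etc/security/keytabs/svc.keytab
    }
}
```

The resulting string would then be passed to DriverManager.getConnection(url).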
12-18-2015
07:48 PM
1 Kudo
Hey @David Streever, I think you have the ordering of keytab and principal in the URL reversed. Per the code, it reads the principal first and then the keytab.
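With hypothetical placeholder values, the correct ordering (principal before keytab) looks like:

```
jdbc:phoenix:zk1.example.com:2181:/hbase-secure:svc@EXAMPLE.COM:/etc/security/keytabs/svc.keytab
```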
12-17-2015
05:13 PM
'/hbase-unsecure' is just the default ZooKeeper parent node created by Ambari when running without security (i.e., Kerberos not enabled). If you want to change this value, you can do so through the Ambari configuration UI for HBase and then restart HBase.
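If it helps to see what Ambari manages, the underlying HBase property is zookeeper.znode.parent; a sketch of the entry it writes into hbase-site.xml:

```xml
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```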
12-15-2015
10:46 PM
2 Kudos
Try providing an argument of "localhost:2181:/hbase-unsecure" instead of "localhost".
12-10-2015
06:45 PM
Great! Glad to hear it's working for you now. I'd encourage you to up-vote and/or accept my answer as correct so other people can find it in the future.
12-09-2015
08:12 PM
3 Kudos
It looks like you have the configuration property "hbase.local.dir" set in hbase-site.xml to "/data0/hadoop/hbase/local". The code checks whether this directory exists and, if it does not, creates it. If you are on a Linux system, it is likely that your client does not have permission to write to the root of the filesystem ("/"). The default value for "hbase.local.dir" is "/tmp/hbase-local-dir". You could consider setting this property to a directory within "/tmp", as that should be writable by any user; however, any directory writable by the user running your code should be sufficient.
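As a sketch, the property could be set like this in hbase-site.xml (the value shown is the default; any directory writable by your client user also works):

```xml
<property>
  <name>hbase.local.dir</name>
  <!-- Must be creatable/writable by the user running the client. -->
  <value>/tmp/hbase-local-dir</value>
</property>
```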
12-09-2015
07:22 PM
1 Kudo
Instead of changing the default ZooKeeper node for HBase, you could (should) include /etc/hbase/conf/hbase-site.xml in the classpath of your application. This will help the HBase libraries find the correct location in ZooKeeper for your HBase instance.
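One way to sketch the classpath setup (the phoenix-client path is an assumption; verify the locations on your installation):

```shell
# Hypothetical layout -- adjust these paths for your cluster.
export CLASSPATH="/etc/hbase/conf:/etc/hadoop/conf:/usr/hdp/current/phoenix-client/phoenix-client.jar"
echo "$CLASSPATH"
```

With /etc/hbase/conf on the classpath, the HBase libraries pick up hbase-site.xml automatically.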
11-06-2015
06:26 PM
It should work even without Kerberos. I'm not sure how the Ranger authorization fits into the picture.