Member since: 11-14-2015
Posts: 268
Kudos Received: 122
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3136 | 08-07-2017 08:39 AM
 | 5492 | 07-26-2017 06:06 AM
 | 11580 | 12-30-2016 08:29 AM
 | 9071 | 11-28-2016 08:08 AM
 | 10162 | 11-21-2016 02:16 PM
09-16-2016
07:48 AM
And along with that, just have hbase-site.xml on the classpath of Spark. You can add hbase-site.xml to the Spark conf directory of all nodes, or add the needed properties to spark-defaults.conf. Or try:

spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
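For reference, the same two settings can also be passed per job at submit time instead of cluster-wide. This is only a sketch: the jar and conf paths are the HDP defaults mentioned above, and `your-app.jar` is a placeholder for your actual application jar.

```shell
spark-submit \
  --conf spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml \
  --conf spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml \
  your-app.jar
```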
09-16-2016
07:45 AM
As an improvement, we could fetch the namespace mapping properties from the server on the client side, so that every client doesn't need to specify them. I have raised a JIRA for the same: https://issues.apache.org/jira/browse/PHOENIX-3288
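Until that improvement lands, the client has to carry the namespace mapping setting itself. A minimal sketch of the relevant client-side hbase-site.xml fragment (the value shown is illustrative and must match whatever the server is configured with):

```xml
<!-- client-side hbase-site.xml: must match the server's setting -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```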
09-16-2016
07:41 AM
1 Kudo
For trace 1 (if you are using sqlline.py): check your <PHOENIX_HOME>/bin directory, remove any hbase-site.xml found there, and try again. If you are using a Java program, you need to ensure hbase-site.xml is on the classpath, or add these properties while creating the connection.

For trace 2 (the Spark job): you need to include hbase-site.xml on the Spark classpath, like this: add hbase-site.xml to the Spark conf directory of all nodes, or add the needed properties to spark-defaults.conf.
Or try:

spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client-spark.jar:/etc/hbase/conf/hbase-site.xml
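For trace 1, a quick way to check for a stray hbase-site.xml is a sketch like the following; `PHOENIX_HOME` here is a placeholder (the HDP default path is shown), so point it at your actual Phoenix install:

```shell
# PHOENIX_HOME is hypothetical -- substitute your Phoenix install path.
PHOENIX_HOME=/usr/hdp/current/phoenix-client
# A stale hbase-site.xml in bin/ shadows the cluster config for sqlline.py.
if [ -f "$PHOENIX_HOME/bin/hbase-site.xml" ]; then
  echo "stray hbase-site.xml found -- consider removing it"
else
  echo "no stray hbase-site.xml in $PHOENIX_HOME/bin"
fi
```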
09-16-2016
07:31 AM
1 Kudo
Maybe you are hitting the following bug: https://issues.apache.org/jira/browse/PHOENIX-2817. Would you mind trying the workaround mentioned at the end of the ticket:

"For people waiting on this fix there is a very simple workaround, provided that you use the default ZK port and path. It's as simple as listing only the server names "server1,server2" so the plugin builds the URL correctly:

jdbc:phoenix:server1,server2:2181:/hbase

Then the delegation tokens set up by spark-submit take care of security, so Phoenix doesn't need to do anything with principals or keytabs. The thing I find a bit confusing is that for other tools the ZooKeeper quorum URL includes the port and the path, while for Phoenix the ZK quorum property is just the server list."
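To make the workaround concrete, here is a small sketch of the URL the plugin ends up building when given only the server list. `server1,server2` are the example hostnames from the ticket, not real servers; `2181` and `/hbase` are the ZooKeeper defaults.

```shell
# server1,server2 are example hostnames from the ticket, not real servers.
SERVERS="server1,server2"
# The plugin appends the default ZK port (2181) and root node (/hbase):
URL="jdbc:phoenix:${SERVERS}:2181:/hbase"
echo "$URL"
# -> jdbc:phoenix:server1,server2:2181:/hbase
```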
09-01-2016
12:12 PM
It seems the jstack is from a different Java version than the one HBase is running on.
Check the JAVA_HOME used by HBase and run jstack from that path: $JAVA_HOME/bin/jstack
09-01-2016
12:04 PM
You need to execute the command as the same user, "hbase".
09-01-2016
11:01 AM
You can take a thread dump by running "jstack <master process id> > /tmp/hmaster_1.jstack". Take multiple of those during initialization.
09-01-2016
09:53 AM
Can you attach the master logs, and take and attach multiple jstacks (at intervals of 1 minute) during initialization of the master?
08-31-2016
06:22 AM
It seems that the server honored the property "hbase.client.scanner.timeout.period" for lease expiration but the client has not. Can you check that your hbase-site.xml is on the classpath of your Phoenix client (application/sqlline) as well?
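A sketch of the relevant hbase-site.xml entry; the 60000 ms value is only illustrative, and the point is that the copy the client sees must carry the same value the server uses:

```xml
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <!-- illustrative value in milliseconds; match your server's setting -->
  <value>60000</value>
</property>
```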
08-26-2016
01:06 PM
@sankar rao, actually I don't have a YARN cluster ready to confirm which log lines need to be searched in the container logs for the file names. It will probably be better if you raise a support case so that a dedicated team can look into the issue specifically.
It also depends on how the data is loaded into the table/HDFS, how the files are zipped, which input format you are using, etc.