Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3473 | 08-06-2019 07:09 PM |
| | 3671 | 07-19-2019 01:57 PM |
| | 5195 | 02-25-2019 04:47 PM |
| | 4666 | 10-11-2018 02:47 PM |
| | 1768 | 09-26-2018 02:49 PM |
10-24-2016
07:10 PM
To clarify, because it is confusing otherwise: despite the missing class being named "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory", this is an Apache Phoenix class. It has to live in the "org.apache.hadoop.hbase..." package in order to access protected HBase API.
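For reference, this is the client-side setting that typically points at that class. The property name is the standard HBase RPC controller factory override; the snippet below is only a sketch of the relevant `hbase-site.xml` entry on the client:

```xml
<!-- Client-side hbase-site.xml: tell the HBase client to use
     Phoenix's RPC controller factory (shipped in the Phoenix
     client jar despite the org.apache.hadoop.hbase package name). -->
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
</property>
```

If that class is missing from the classpath, making sure the Phoenix client jar is on the HBase client's classpath is usually the fix.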
10-21-2016
04:22 PM
2 Kudos
Turns out this was a bug in the Phoenix Query Server (technically, in Apache Avatica): https://issues.apache.org/jira/browse/CALCITE-1458 The author of the Phoenix-Sharp (C#) driver https://github.com/Azure/hdinsight-phoenix-sharp hit a similar problem to what you are describing. A change in serialization was not made fully backwards compatible, so the attribute the phoenixdb driver likely expected (I have not verified this) was no longer being serialized over the wire. As such, the server was sending the data back in an attribute the client did not know existed. I'm putting a fix up on CALCITE-1458 shortly which should address the issue without requiring a change in the client.
10-21-2016
03:31 PM
Hrm. Curious. Maybe the HBase client API is doing multiple retries before giving up? You can try reducing "Maximum Client Retries" on the Ambari configuration page for HBase from 35 to 5 or 10 to make the client give up and tell you why it was failing/retrying. Just a guess given what you've been able to provide so far.
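Ambari's "Maximum Client Retries" setting maps to the `hbase.client.retries.number` property, so the equivalent change in a client `hbase-site.xml` would look roughly like this (a sketch; tune the value to taste):

```xml
<!-- hbase-site.xml: fail fast instead of retrying 35 times,
     so the client surfaces the underlying error sooner.
     Equivalent to "Maximum Client Retries" in Ambari. -->
<property>
  <name>hbase.client.retries.number</name>
  <value>5</value>
</property>
```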
10-20-2016
07:34 PM
1 Kudo
Apache Hive and Apache HBase are fundamentally different systems with completely different architectures. Which is more efficient really depends on the application use case: it's impossible to state generically that Hive or HBase is better or worse than the other, and the fact that they both use HDFS for storing data is irrelevant. Please quantify your application requirements if you'd like an answer about whether Hive or HBase is better for you.
10-20-2016
03:16 PM
Can you share the full HBase errors/exceptions you see? It's not clear to me from your description whether this should be approached from the Storm side or the HBase side.
10-19-2016
05:59 PM
1 Kudo
It appears that this is a hard-coded check in the client-side HBase Mutation class. I would recommend that you consider why you are creating such a large row key in the first place. There is some reading you could also do on the subject: http://hbase.apache.org/book.html#keysize
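The limit in question is that HBase row keys must be non-empty and at most `Short.MAX_VALUE` (32767) bytes. A minimal Python sketch of the same check (`validate_row_key` is a hypothetical helper for illustration, not an HBase API):

```python
# Sketch of the client-side limit HBase's Mutation class enforces:
# row keys must be non-empty and at most Short.MAX_VALUE bytes.
MAX_ROW_LENGTH = 32767  # java.lang.Short.MAX_VALUE

def validate_row_key(row: bytes) -> bytes:
    """Reject row keys HBase would refuse client-side."""
    if len(row) == 0:
        raise ValueError("Row key is empty")
    if len(row) > MAX_ROW_LENGTH:
        raise ValueError(
            f"Row key is {len(row)} bytes; maximum is {MAX_ROW_LENGTH}"
        )
    return row

validate_row_key(b"user#2016-10-19#event-42")  # small key: accepted
try:
    validate_row_key(b"x" * 40000)             # oversized: rejected
except ValueError as e:
    print(e)
```

If your keys are approaching that size, it is almost always a sign that data belonging in a column has been packed into the key.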
10-18-2016
02:45 PM
That is not a good idea. It is not well tested how the version of Phoenix provided in HDP 2.3/2.4 interacts with Apache Phoenix 4.8. You would likely be on your own there 🙂
10-17-2016
03:11 PM
This fix may eventually make it into a maintenance release for HDP. If you have a Hortonworks support contract, you can escalate this through them to get a fix made with urgency. The workaround is to provide "true" and "false" when referring to booleans instead of "1" and "0".
10-17-2016
03:10 PM
"firstFrame":{"offset":0,"done":true,"rows":[]} shows that no results were read from your query. Can you verify that the results are actually present in your table? Also, I do not believe the author is still maintaining the Python phoenixdb library. Please refer to the official Apache Calcite Avatica documentation for how to use the JSON API: http://calcite.apache.org/avatica/docs/json_reference.html
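To rule out the driver, you can build an Avatica JSON request by hand, following the json_reference docs. This is a sketch: exact field names (e.g. `maxRowCount`) vary across Avatica versions, the connection/statement ids must come from prior `openConnection`/`createStatement` calls, and `localhost:8765` is just the default Phoenix Query Server port.

```python
import json

# Sketch of a raw Avatica "prepareAndExecute" JSON request.
request = {
    "request": "prepareAndExecute",
    "connectionId": "conn-1",   # from a prior openConnection call
    "statementId": 1,           # from a prior createStatement call
    "sql": "SELECT * FROM MY_TABLE LIMIT 10",
    "maxRowCount": 100,
}
payload = json.dumps(request)
# POST `payload` to http://localhost:8765/ with Content-Type
# application/json. The response's firstFrame holds the initial rows;
# "done": true with empty "rows" means the query matched nothing.
print(payload)
```

Comparing the raw response frame against what the driver reports will tell you whether the rows are missing on the server side or being dropped by the client.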