Member since: 07-17-2019
738 Posts | 433 Kudos Received | 111 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3473 | 08-06-2019 07:09 PM |
| | 3670 | 07-19-2019 01:57 PM |
| | 5194 | 02-25-2019 04:47 PM |
| | 4666 | 10-11-2018 02:47 PM |
| | 1768 | 09-26-2018 02:49 PM |
09-22-2016 02:30 PM
2 Kudos
Despite the full name, ServerRpcControllerFactory is actually a class from Apache Phoenix, not HBase. Try including /usr/hdp/current/phoenix-client/phoenix-client.jar on your classpath.
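A minimal sketch of putting that jar on the classpath, assuming the standard HDP layout mentioned above (adjust the path for your distribution):

```shell
# Prepend the Phoenix client jar (HDP path from the answer above) to the Hadoop classpath.
export HADOOP_CLASSPATH="/usr/hdp/current/phoenix-client/phoenix-client.jar:${HADOOP_CLASSPATH}"
echo "${HADOOP_CLASSPATH}"
```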
09-15-2016 02:47 PM
Again, your issue is unrelated to this one. Please stop piggy-backing on other issues and create one for yourself.
09-14-2016 03:49 PM
You have a completely different error, @Ashish Gupta. Please create your own question for this issue; it is related to your classpath.
09-11-2016 07:08 PM
Odd that saving it as a text file doesn't cause an error, but glad you got to the bottom of it. If you want or need any more context, I tried to capture some information on maxClientCnxns recently in https://community.hortonworks.com/articles/51191/understanding-apache-zookeeper-connection-rate-lim.html
09-08-2016 03:09 PM
The ZooKeeper exceptions and log messages imply that either your client is having difficulty maintaining its heartbeat with the ZK server, or the ZK server is having trouble responding. The "Connection reset by peer" is likely just the ZK server closing the connection from the client (probably to save resources) and is not directly indicative of a problem. The ConnectionLoss warning, however, might be a sign of a larger problem. Disconnected is a state in the ZK client's lifecycle in which the client is no longer in sync with the ZK server but still has the ability to re-connect to the server and sync its state. Commonly, this happens when a client application experiences garbage-collection pauses which prevent the ZK heartbeat threads from running as intended. I'd recommend that you try to determine what your Spark job is doing at the time that "it rapidly gets stuck processing". You could obtain a thread dump from the application to see what it is presently doing, or collect GC logs for the application by setting the appropriate JVM options.
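One hedged sketch of collecting those GC logs for a Spark job, assuming Java 8 flag syntax and a writable log path of your choosing; the flags are passed through Spark's standard `spark.executor.extraJavaOptions` / `spark.driver.extraJavaOptions` settings:

```shell
# Java 8 GC-logging flags; the log path is an example, pick one writable on your nodes.
GC_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/spark-gc.log"
# Example invocation (not run here):
#   spark-submit --conf "spark.executor.extraJavaOptions=${GC_OPTS}" \
#                --conf "spark.driver.extraJavaOptions=${GC_OPTS}" ...
echo "${GC_OPTS}"
```

Long or frequent pauses in the resulting GC log at the times the ZK session drops would support the garbage-collection explanation above.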
09-06-2016 03:36 PM
The exception message is already telling you what to do: "Current heap configuration for MemStore and BlockCache exceeds the threshold required for successful cluster operation. The combined value cannot exceed 0.8. Please check the settings for hbase.regionserver.global.memstore.upperLimit and hfile.block.cache.size in your configuration." The sum of hbase.regionserver.global.memstore.upperLimit and hfile.block.cache.size in hbase-site.xml cannot exceed 0.8.
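For illustration, a minimal hbase-site.xml fragment where the two values sum to exactly 0.8 (0.4 + 0.4); the specific split between MemStore and BlockCache is an example, not a recommendation:

```xml
<!-- Example values only: the sum (0.4 + 0.4 = 0.8) stays within the limit -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
```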
09-06-2016 03:08 PM
"What happens is that data goes into a single region initially and the region goes way beyond the split threshold (10GB or R^2 * flush size - they are using default split policy). I saw a region as big as 2.2T with constant compactions that take 4-5 hrs." This seems very bad. There should be back-pressure (e.g. a maximum number of store files or similar) that prevents a region from growing this large without a split happening.
09-02-2016 03:33 PM
The Phoenix Query Server (PQS) is an optional service; you only need to add it if you intend to use it. PQS adds an additional level of connectivity to Apache Phoenix/HBase but is not required for Phoenix access. As Sunile points out, you can easily add this service to your installation later as the need arises.
09-01-2016 04:38 PM
1 Kudo
Yes. The Phoenix JDBC driver does not require the Phoenix Query Server. The Phoenix Query Server provides a separate "thin" JDBC driver that talks to PQS over HTTP.
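To make the distinction concrete, a sketch of the two connection styles, assuming an HDP install; "pqs-host.example.com" is a placeholder, and PQS listens on port 8765 by default:

```shell
# Placeholder PQS endpoint; the default PQS port is 8765.
PQS_URL="http://pqs-host.example.com:8765"
# Thick driver (direct to ZooKeeper, no PQS needed):
#   /usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181
# Thin driver (connects via PQS):
#   /usr/hdp/current/phoenix-client/bin/sqlline-thin.py "${PQS_URL}"
echo "${PQS_URL}"
```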