Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
07-20-2017
08:18 PM
You need to install the phoenix-server.jar on all RegionServer and Master servers. MetaDataEndpointImpl
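For reference, a minimal sketch of the usual deployment step (not taken from the truncated post above; paths and the restart command are placeholders that assume a manually managed HBase install):

# Run on every HBase Master and RegionServer host; adjust paths for your install.
cp /path/to/phoenix/phoenix-*-server.jar "$HBASE_HOME/lib/"
# Restart the daemon so coprocessor classes such as MetaDataEndpointImpl are loaded
# (use Ambari instead if the cluster is Ambari-managed; use "restart master" on Master hosts).
"$HBASE_HOME/bin/hbase-daemon.sh" restart regionserver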
06-21-2017
04:03 PM
1 Kudo
Hi @Sami Ahmad Normally, master services can be spread across the master nodes to ensure proper resource allocation, depending on the cluster. If you have two datanode/worker nodes that you do not want to run master services on, then no problem: just allocate the hosts you want and move on to the next step. In Ambari, you can click on the Hosts tab to see which services are installed on which host, but you may need to go through them host by host.
06-01-2017
05:00 PM
20 Kudos
I was recently involved with quite possibly the worst HBase performance debugging issue of my career so far. The issue first arose with a generic problem statement: after X hours of processing, tasks accessing HBase begin to take over 10 times longer than before. Upon restarting HBase, performance returned to expected levels. There were no obvious errors in the HBase logs, the HDFS logs, or the hosts' syslog. The problem would manifest itself on a near-constant period: every X hours after a restart. It affected different types of client tasks (both reading and writing), and was not limited to a specific node or set of nodes. Strangely, despite all inspection of HBase logs and profiling information, HBase seemed to be functioning perfectly fine. Just slower.

This led us to investigate numerous operating system configuration changes and monitoring approaches, none of which completely explained the circumstances and symptoms of the problem. After many long days of investigation and experimenting with JVM options, we stumbled onto the first answer which satisfied (or, at least, didn't invalidate) the circumstances: a known, unfixed bug in Java 7 in which JIT code compilation is disabled after the JIT's code cache executes a flush to reclaim space: https://bugs.openjdk.java.net/browse/JDK-8051955

The JIT (just-in-time) compiler runs behind the scenes in Java, compiling Java byte code into native machine code. It is designed to help long-lived Java applications run fast without negatively affecting the start-up time of short-lived applications. After methods are invoked, they are compiled from Java byte code into machine code and cached by the JVM. Subsequent invocations of a cached method can execute the machine code directly instead of interpreting the byte code again.

Analysis:
On a 64-bit JVM with Java 7, this cache has a size of 50MB, which is sufficient for most applications. Methods which are not invoked frequently are evicted from the cache, which helps keep the JVM from quickly reaching the limit. With sufficient time, however, the cache can still become full, triggering a temporary halt of JIT compilation and caching while the cache is flushed. In Java 7, there is an unresolved issue in that JIT compilation is not re-enabled after the code cache is flushed. While the process continues to run, no new machine code is cached, which means the affected code keeps running as interpreted byte code instead of native machine code. We were able to confirm that this is what was happening by enabling two JVM options for the HBase services in hbase-env.sh:
-XX:+PrintCompilation
-XX:+PrintSafepointStatistics
The first option prints a log message for every compilation, for every method marked as "not entrant" (the method is a candidate for removal from the cache), and for every method marked as "zombie" (removed from the cache). This is helpful in determining when JIT compilation is happening. The second option prints debugging information about the JVM safepoints which are taken. A JVM safepoint can be thought of as a low-level "lock": the safepoint is taken to provide mutual exclusion at the JVM level. A common use for this option is to analyze the frequency and duration of garbage collection operations; for example, the concurrent-mark-and-sweep (CMS) collector takes safepoints at various points in its execution. When the code cache becomes full and a flushing event occurs, a safepoint named "HandleFullCodeCache" is taken.

The combination of these two options can show that a Java process performs JIT compilation up until the point that the "HandleFullCodeCache" safepoint is executed, and that no further JIT compilation happens after that point. In our case, the point at which JIT compilation stopped was within roughly one hour of when the tasks reportedly began to see performance issues. We did not observe the following log message, which was meant to make this obscure issue more obvious; we missed it because we were working remotely on a decent-sized installation, which made it infeasible to collect and analyze all of the logs:

Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.

Solution:
There are two solutions to this problem: one short-term and one long-term. The short-term solution is to increase the size of the JVM code cache from the default of 50MB on 64-bit JVMs. This can be accomplished via the -XX:ReservedCodeCacheSize JVM option. Increasing it to a larger value can prevent the code cache from ever becoming completely full.

export HBASE_SERVER_OPTS="$HBASE_SERVER_OPTS -XX:ReservedCodeCacheSize=256m"

On HDP releases <=2.6, it is necessary to set the HBASE_REGIONSERVER_OPTS variable explicitly instead:

export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:ReservedCodeCacheSize=256m"

The cost of this configuration is a modest amount of additional memory reserved by the JVM, but it is typically quite minor (hundreds of MB when heaps are typically multiple GB). The long-term solution is to upgrade to Java 8. Java 7 has long been end-of-life'd by Oracle, and this is a prime example of a known issue which was never patched in Java 7. It is strongly recommended that any user still on Java 7 have a plan to move to Java 8 as soon as possible. No other changes are required on Java 8, as it is not subject to this bug.
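As a quick way to watch for this condition on a live RegionServer, the JDK's jstat tool reports JIT compiler activity. A minimal sketch, assuming a single HBase RegionServer per host and that the JDK tools (jps, jstat) are on the PATH; the process-matching pattern is only illustrative:

# Find the RegionServer process id (assumes one RegionServer on this host).
RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
# Print JIT compiler statistics every 10 seconds; if the "Compiled" column
# stops increasing while the process is busy, the disabled-compiler condition
# described above is a likely suspect.
jstat -compiler "$RS_PID" 10s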
10-27-2017
08:16 PM
1 Kudo
This works from Zeppelin: select DISTINCT("TABLE_NAME") from SYSTEM.CATALOG;
05-08-2017
03:04 PM
Thank you for your fast answer 🙂 Here is a piece from the logs:

handler.OpenRegionHandler: Failed open of region=tsdb,\x00\x02OX\xB2`\xD0\x00\x00\x01\x00\x97D\x00\x00\x03\x00\x00_\x00\x00\x04\x00\x004,1489718792446.a38ea9c28bd1a11574e831668d80c19f., starting to roll back the global memstore size.
org.apache.hadoop.hbase.DoNotRetryIOException: Compression algorithm 'snappy' previously failed test.
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:91)
at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:6560)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6512)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6479)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6450)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6406)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6357)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
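To confirm whether this host can actually load the snappy codec, HBase ships a compression test utility, and Hadoop can report which native libraries it finds. A minimal sketch to run on the affected RegionServer host (the /tmp path is just an example):

# Check which native libraries (including snappy) the Hadoop native loader can see.
hadoop checknative -a
# Write and re-read a small test file using the snappy codec; a failure here
# usually means the native snappy libraries are missing or misconfigured on this host.
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-check snappy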
04-05-2017
04:27 PM
4 Kudos
The Phoenix Query Server is an HTTP server which expects very specific request data. Sometimes, in the process of connecting different clients, the various configuration options of both client and server can create confusion about what data is actually being sent over the wire. This confusion leads to questions like "did my configuration property take effect" and "is my client operating as I expect". Linux systems often have a number of tools available for analyzing network traffic on a node. We can use one of these tools, ngrep, to analyze the traffic flowing into the Phoenix Query Server. From a host running the Phoenix Query Server, the following command dumps all traffic, from any source, to the Phoenix Query Server:

$ sudo ngrep -t -d any port 8765

The above command listens to any incoming network traffic on the current host and filters out any traffic which is not to port 8765 (the default port for the Phoenix Query Server). A specific network interface (e.g. eth0) can be provided instead of "any" to further filter traffic. When connecting a client to the server, you should be able to see the actual HTTP requests and responses sent between client and server:

T 2017/04/05 12:49:07.041213 127.0.0.1:60533 -> 127.0.0.1:8765 [AP]
POST / HTTP/1.1..Content-Length: 137..Content-Type: application/octet-stream..Host: localhost:8765..Connection: Keep-Alive..User-Agent: Apache-HttpClient/4.5.2 (Java/1.8.0_45)..Accept-Encoding: gzip,deflate.....?org.apache.calcite.avatica.proto.Requests$OpenConnectionRequest.F.$2ba8e796-1a29-4484-ac88-6075604152e6....password..none....user..none
##
T 2017/04/05 12:49:07.052011 127.0.0.1:8765 -> 127.0.0.1:60533 [AP]
HTTP/1.1 200 OK..Date: Wed, 05 Apr 2017 16:49:07 GMT..Content-Type: application/octet-stream;charset=utf-8..Content-Length: 91..Server: Jetty(9.2.z-SNAPSHOT).....Aorg.apache.calcite.avatica.proto.Responses$OpenConnectionResponse......hw10447.local:8765
##

The data above is in Protocol Buffers, which is not a fully human-readable format; however, "string" data is stored as-is, which makes reading it a reasonable task.
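To generate traffic to observe, any Avatica-based thin client will do. As one example, the sqlline-thin shell shipped with Phoenix can be pointed at the Query Server; the path below assumes an HDP-style layout and is only a placeholder:

# Each statement typed in this shell appears in the ngrep capture as a
# protobuf-encoded HTTP POST like the ones shown above.
/usr/hdp/current/phoenix-client/bin/sqlline-thin.py http://localhost:8765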
07-18-2018
07:19 AM
Hi Josh, In the Phoenix datatype description (link), it's mentioned that the Phoenix unsigned data types map to the HBase Bytes.toBytes methods. Is there a way to utilize these unsigned data types to map existing HBase data to Phoenix tables and be able to read the data correctly from Phoenix? I mapped numbers inserted through the HBase shell to the UNSIGNED_INT datatype in Phoenix, but I was still getting the same error that bsaini was getting in the above question. Could you please clarify whether we can use UNSIGNED_INT in the above scenario? Thanks
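For reference, the usual pattern for exposing an existing HBase table to Phoenix is a CREATE VIEW whose column types match how the bytes were originally serialized; a minimal sketch follows (the table, column family, qualifier, paths, and ZooKeeper quorum are hypothetical). Note that UNSIGNED_INT decodes values written with HBase's Bytes.toBytes(int), so numbers typed as plain strings through the HBase shell will not read back as expected:

# Write the mapping DDL to a file and run it with the Phoenix thick client
# (paths and the "zk1:2181" quorum are assumptions).
cat > /tmp/map_existing.sql <<'EOF'
-- "t1" is an existing HBase table with column family "cf" and qualifier "num".
CREATE VIEW "t1" (
    "pk"       VARCHAR PRIMARY KEY,
    "cf"."num" UNSIGNED_INT
);
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181 /tmp/map_existing.sql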
03-10-2017
11:26 PM
A common error to see in initial installations is the following, from the Accumulo TabletServer logs:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/accumulo/data/wal/myhost.mydomain.com+9997/1ff916a2-13d0-4bb7-aa38-c44b69831519 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
This exception will be printed repeatedly in the TabletServer logs, as Accumulo has no option other than to keep trying to create its write-ahead log file. Indirectly, the exception tells us multiple things about the current state:

- There are three Datanodes.
- None of the Datanodes were excluded, which means all three of them should have been able to accept the write.
- None of the Datanodes successfully accepted the write.

The most common cause of this issue is that each Datanode has very little disk space available. When Accumulo creates its write-ahead log files, it sets a large HDFS block size (by default, 1GB). If a Datanode does not have enough free space to store 1GB of data, the allocation fails. When all of the Datanodes are in this situation, you see the above error message. The solution is to provide more storage for the Datanodes. Commonly, the shortfall is because HDFS is not configured to use the correct data directories, or because some hard drives were not mounted at the data directories (and thus the Datanodes are writing to the root volume).
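To check whether low Datanode free space is the culprit, HDFS can report per-Datanode capacity and the configured data directories. A minimal sketch (dfsadmin may require HDFS superuser privileges depending on the cluster's security setup):

# Look for Datanodes whose "DFS Remaining" is smaller than the WAL block size (1GB by default).
hdfs dfsadmin -report
# Confirm which local directories the Datanodes are configured to write to.
hdfs getconf -confKey dfs.datanode.data.dir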
12-08-2017
09:56 AM
@Josh Elser, we seem to be hitting the same issue. Has this been resolved?
02-13-2017
04:12 PM
Hi Josh, Thank you very much for your reply. Could you take a look at this question: https://community.hortonworks.com/questions/83220/how-to-use-knox-to-securely-access-hbase-through-o.html? Thanks!