Member since
07-17-2019
738
Posts
433
Kudos Received
111
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3474 | 08-06-2019 07:09 PM |
| | 3673 | 07-19-2019 01:57 PM |
| | 5205 | 02-25-2019 04:47 PM |
| | 4668 | 10-11-2018 02:47 PM |
| | 1771 | 09-26-2018 02:49 PM |
05-10-2017
01:18 AM
"!tables" is a feature of Sqlline, not Phoenix. I don't believe Phoenix has a standard SQL syntax for listing tables (such as `show tables`). You would have to use the DatabaseMetaData API that JDBC provides. I am not sure if Zeppelin exposes that for you. https://docs.oracle.com/javase/7/docs/api/java/sql/DatabaseMetaData.html#getTables(java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String[])
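A minimal sketch of that JDBC approach (the class and method names here are my own, not from the thread; against Phoenix you would obtain the `Connection` from a URL of the form `jdbc:phoenix:<zk-quorum>`):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PhoenixListTables {
    // Lists the table names visible through the connection's JDBC metadata.
    // This uses only the standard DatabaseMetaData API, so it works with any
    // JDBC driver, including Phoenix's.
    public static List<String> listTables(Connection conn) throws SQLException {
        List<String> names = new ArrayList<>();
        DatabaseMetaData md = conn.getMetaData();
        // null catalog/schema plus a "%" pattern match every table of type TABLE
        try (ResultSet rs = md.getTables(null, null, "%", new String[] {"TABLE"})) {
            while (rs.next()) {
                names.add(rs.getString("TABLE_NAME"));
            }
        }
        return names;
    }
}
```

Whether Zeppelin hands you the underlying `Connection` so you can call this is the open question above.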
05-08-2017
02:56 PM
You should look at the HBase Master and RegionServer logs to understand why these regions failed to be assigned.
05-05-2017
03:10 PM
Something is happening on your datanodes that is causing HBase to mark them as "bad":
2017-05-03 21:22:38,729 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop5,16020,1493618413009/aps-hadoop5%2C16020%2C1493618413009.default.1493846432867 block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908] hdfs.DFSClient: Error Recovery for block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908 in pipeline DatanodeInfoWithStorage[1..1..1..:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK], DatanodeInfoWithStorage[10.64.228.140:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[10.64.228.150:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]: bad datanode DatanodeInfoWithStorage[1..1..1..:50010,DS-751946a0-5a6f-4485-ad27-61f061359410,DISK]
2017-05-03 21:22:41,744 INFO [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop5,16020,1493618413009/aps-hadoop5%2C16020%2C1493618413009.default.1493846432867 block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 10.64.228.164:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1217)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
2017-05-03 21:22:41,745 WARN [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop5,16020,1493618413009/aps-hadoop5%2C16020%2C1493618413009.default.1493846432867 block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908] hdfs.DFSClient: Error Recovery for block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908 in pipeline DatanodeInfoWithStorage[10.64.228.140:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[10.64.228.150:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK], DatanodeInfoWithStorage[10.64.228.164:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK]: bad datanode DatanodeInfoWithStorage[10.64.228.164:50010,DS-9ba4f08a-d996-4490-b27d-6c8ca9a67152,DISK]
2017-05-03 21:22:44,779 INFO [DataStreamer for file /apps/hbase/data/WALs/aps-hadoop5,16020,1493618413009/aps-hadoop5%2C16020%2C1493618413009.default.1493846432867 block BP-1810172115-10.64.228.157-1478343078462:blk_1079562185_5838908] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 10.64.228.141:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1217)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:904)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:411)
I'd look in those datanode logs and figure out why they failed to respond to HBase writing data. It seems like HBase is whittling the write pipeline down to just the datanodes it can actually talk to (out of your five). In general, your HDFS seems very unstable: at one point it took over 70 seconds to sync data, which should be a sub-second operation:
2017-05-03 21:22:44,782 INFO [sync.0] wal.FSHLog: Slow sync cost: 72065 ms, current pipeline: [DatanodeInfoWithStorage[10.64.228.140:50010,DS-8ab76f9c-ee05-4ec0-897a-8718ab89635f,DISK], DatanodeInfoWithStorage[10.64.228.150:50010,DS-57010fb6-92c0-4c3e-8b9e-11233ceb7bfa,DISK]]
05-04-2017
04:14 PM
1 Kudo
Check the Master log -- that is the entity which believes this RegionServer should be dead. Most likely, the HBase Master received a notification from ZooKeeper that this RegionServer lost its ZooKeeper node, so the Master marked the RegionServer as dead and started re-assigning its regions. However, the RegionServer had not yet realized that it had lost its own lock (a classic distributed-systems problem) and kept trying to perform actions. I would bet that you see a message in the RS log about losing its ZK lock or a SessionExpiration. Commonly, this happens due to a JVM pause from garbage collection. All of this should be captured in the logs -- you will need to read them to confirm that this is what happened.
05-04-2017
02:59 PM
1 Kudo
It would appear from the logs that you only have two datanodes, so there are no spare datanodes to swap in and this property can't actually do anything. Either stabilize your datanodes, add more datanodes, or reduce the HDFS replication factor.
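Assuming the property in question is HDFS's replace-datanode-on-failure policy (the thread does not name it, so this is my inference), the relevant client-side settings in hdfs-site.xml look like:

```xml
<!-- hdfs-site.xml (client side); assumed to be the property discussed above -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- DEFAULT only swaps in a replacement when the cluster has spare
       datanodes; with only two datanodes there is nothing to swap in. -->
  <value>DEFAULT</value>
</property>
```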
05-04-2017
02:55 PM
Include the directory containing hbase-site.xml on your application's classpath. Phoenix relies on HBase RPCs, and this error is commonly a sign that your client is trying to initiate insecure RPCs while HBase is expecting Kerberos authentication.
05-03-2017
03:11 PM
Have you verified that HDFS is healthy (e.g., by running `hdfs fsck /`)? What operation triggers this error? Does the error happen every time you run that operation?
04-25-2017
03:35 PM
Use the HBCK tool to identify corruption of HBase data in HDFS:
$ hbase hbck
You should include a full log file next time, but it would appear that the data in HDFS is corrupt: "Caused by: java.lang.IllegalArgumentException: Need table descriptor". The table descriptor is stored in a file in HDFS.
04-24-2017
05:43 PM
1 Kudo
The only concern that comes to mind is that it may increase the latency of applications accessing HBase. If increased application latency isn't a concern, you may restart multiple RegionServers at a time.
04-17-2017
02:33 PM
It sounds like there is no network issue between the Master and the RegionServer, so you would need to look at the HBase level instead. The error message you provided only tells you that the Master has seen no RegionServers, but you already knew that: the error message in your question showed that the RegionServer failed to report to the Master. You need to figure out why that report is failing -- perhaps by looking at the DEBUG-level logs.