Member since: 11-17-2017
Posts: 5
Kudos Received: 1
Solutions: 0
12-21-2017
08:52 AM
@bkosaraju, your explanation makes sense. Thanks for clarifying! My understanding of the threshold was different. Robert
12-20-2017
09:00 PM
Hi, I have a problem with rebalancing HDFS after adding a new DataNode to the cluster. My configuration had 4 DataNodes and I added a 5th one. Below is the report from dfsadmin:
[hdfs@snr-prod-master0 ~]$ hdfs dfsadmin -report
Configured Capacity: 21563228579840 (19.61 TB)
Present Capacity: 20460562895805 (18.61 TB)
DFS Remaining: 20290148094909 (18.45 TB)
DFS Used: 170414800896 (158.71 GB)
DFS Used%: 0.83%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (5):
Name: 172.17.2.61:50010 (snr-prod-slave1)
Hostname: snr-prod-slave1
Decommission Status : Normal
Configured Capacity: 4312645715968 (3.92 TB)
DFS Used: 35358969856 (32.93 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 4056646234773 (3.69 TB)
DFS Used%: 0.82%
DFS Remaining%: 94.06%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 12
Last contact: Wed Dec 20 20:52:23 UTC 2017
Name: 172.17.2.64:50010 (snr-prod-slave4)
Hostname: snr-prod-slave4
Decommission Status : Normal
Configured Capacity: 4312645715968 (3.92 TB)
DFS Used: 47864344576 (44.58 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 4044275077691 (3.68 TB)
DFS Used%: 1.11%
DFS Remaining%: 93.78%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 10
Last contact: Wed Dec 20 20:52:26 UTC 2017
Name: 172.17.2.62:50010 (snr-prod-slave2)
Hostname: snr-prod-slave2
Decommission Status : Normal
Configured Capacity: 4312645715968 (3.92 TB)
DFS Used: 221184 (216 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 4092407638196 (3.72 TB)
DFS Used%: 0.00%
DFS Remaining%: 94.89%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Wed Dec 20 20:52:26 UTC 2017
Name: 172.17.2.65:50010 (snr-prod-slave5)
Hostname: snr-prod-slave5
Decommission Status : Normal
Configured Capacity: 4312645715968 (3.92 TB)
DFS Used: 44406976512 (41.36 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 4047866664447 (3.68 TB)
DFS Used%: 1.03%
DFS Remaining%: 93.86%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Wed Dec 20 20:52:23 UTC 2017
Name: 172.17.2.60:50010 (snr-prod-slave0)
Hostname: snr-prod-slave0
Decommission Status : Normal
Configured Capacity: 4312645715968 (3.92 TB)
DFS Used: 42784288768 (39.85 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 4048952479802 (3.68 TB)
DFS Used%: 0.99%
DFS Remaining%: 93.89%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 16
Last contact: Wed Dec 20 20:52:23 UTC 2017
After adding the new node to the cluster I ran the rebalance operation to distribute the data equally, but it reports that the cluster is already balanced (The cluster is balanced. Exiting...):
[hdfs@snr-prod-master0 ~]$ hdfs balancer -threshold 5
17/12/20 20:57:36 INFO balancer.Balancer: Using a threshold of 5.0
17/12/20 20:57:36 INFO balancer.Balancer: namenodes = [hdfs://snr-prod-master0:8020]
17/12/20 20:57:36 INFO balancer.Balancer: parameters = Balancer.BalancerParameters [BalancingPolicy.Node, threshold = 5.0, max idle iteration = 5, #excluded nodes = 0, #included nodes = 0, #source nodes = 0, #blockpools = 0, run during upgrade = false]
17/12/20 20:57:36 INFO balancer.Balancer: included nodes = []
17/12/20 20:57:36 INFO balancer.Balancer: excluded nodes = []
17/12/20 20:57:36 INFO balancer.Balancer: source nodes = []
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
17/12/20 20:57:37 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
17/12/20 20:57:38 INFO block.BlockTokenSecretManager: Setting block keys
17/12/20 20:57:38 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 5400000 (default=5400000)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 (default=1000)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 2147483648 (default=2147483648)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size = 10485760 (default=10485760)
17/12/20 20:57:38 INFO block.BlockTokenSecretManager: Setting block keys
17/12/20 20:57:38 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 10737418240 (default=10737418240)
17/12/20 20:57:38 INFO balancer.Balancer: dfs.blocksize = 134217728 (default=134217728)
17/12/20 20:57:38 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.2.61:50010
17/12/20 20:57:38 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.2.60:50010
17/12/20 20:57:38 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.2.64:50010
17/12/20 20:57:38 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.2.62:50010
17/12/20 20:57:38 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.2.65:50010
17/12/20 20:57:38 INFO balancer.Balancer: 0 over-utilized: []
17/12/20 20:57:38 INFO balancer.Balancer: 0 underutilized: []
The cluster is balanced. Exiting...
Dec 20, 2017 8:57:38 PM 0 0 B 0 B 0 B
Dec 20, 2017 8:57:38 PM Balancing took 1.714 seconds
What am I missing? Thanks for any reply! Robert
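For reference, here is a minimal sketch of how the -threshold option is interpreted: it is a deviation in percentage points from the cluster-average utilization, not a request to spread blocks evenly. The figures below are the per-node DFS Used% values from the dfsadmin report above; the real balancer weights by capacity rather than averaging percentages, but with equal-sized nodes the result is essentially the same.

# Sketch of how "hdfs balancer -threshold 5" decides whether anything needs
# to move, using the per-node "DFS Used%" figures from the report above.

node_used_pct = {
    "snr-prod-slave0": 0.99,
    "snr-prod-slave1": 0.82,
    "snr-prod-slave2": 0.00,   # the newly added, still-empty DataNode
    "snr-prod-slave4": 1.11,
    "snr-prod-slave5": 1.03,
}

threshold = 5.0  # -threshold 5
cluster_avg = sum(node_used_pct.values()) / len(node_used_pct)  # ~0.79%

over_utilized = [n for n, pct in node_used_pct.items()
                 if pct > cluster_avg + threshold]
under_utilized = [n for n, pct in node_used_pct.items()
                  if pct < cluster_avg - threshold]

print(f"cluster average utilization: {cluster_avg:.2f}%")
print("over-utilized: ", over_utilized)    # [] -- nothing is >5 points above average
print("under-utilized:", under_utilized)   # [] -- nothing is >5 points below average
# With both lists empty the balancer logs "0 over-utilized" / "0 underutilized"
# and exits with "The cluster is balanced."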
Labels:
- Apache Hadoop
12-12-2017
01:42 PM
Hi @Ian Roberts, did you manage to configure query tracing in Phoenix? I have a similar need and couldn't get it working on Hortonworks HDP 2.6. Thanks in advance for any reply! Robert
12-01-2017
11:53 PM
1 Kudo
Hi guys, I am using an HDP cluster (HDP-2.5.0.0) with HBase 1.1.2 and Phoenix 4.7.0. A couple of days ago we experienced a major crash resulting in inconsistencies in one of our tables (1.5 TB), plus 2 index tables since we are using Phoenix. I have run hbase hbck and found the following results (snippet below):
---- Table 'SR_ACTIVITIES': overlap groups
There are 0 overlap groups with 0 overlapping regions
ERROR: Found inconsistency in table SR_ACTIVITIES
---- Table 'hbase:meta': region split map
: [ { meta => hbase:meta,,1.1588230740, hdfs => hdfs://pcluster/apps/hbase/data/data/hbase/meta/1588230740, deployed => hbase-hdp206.lan,16020,1512001442898;hbase:meta,,1.1588230740, replicaId => 0 }, ]
null:
---- Table 'hbase:meta': overlap groups
There are 0 overlap groups with 0 overlapping regions
2017-12-02 00:32:56,453 INFO [main] util.HBaseFsck: Computing mapping of all store files
................................................................................................................................................................
2017-12-02 00:32:57,075 INFO [main] util.HBaseFsck: Validating mapping using HDFS state
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/3bb33b5f4fba26a9f56dec1aedd402c9/0/e59e308a41194bb5b8299c84b0818dca.d143c81eb14ac29cd4e92681fde8cd7a
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/436271a35b6a313ac5fa69db4795e73b/0/d39348bde5134b12b0c79b8382f79a78.18ff59ef5cd0d918247f1419625020ab
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/e97a62ccee7217b9492de2efabfcfd3c/0/ad9129b0813a4f4892c51688140f4a03.2594d95578b8946187267112cc0d4098
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/3bb33b5f4fba26a9f56dec1aedd402c9/0/2705dd256c0e49a39152d7958c074e17.d143c81eb14ac29cd4e92681fde8cd7a
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/0eceec2ec0e6e2d732f2b7a028fee2c7/0/6de9e548483e4550adee0c8ed51c7ddc.9695332dd4db41f2f4b98a5da0966eed
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/f5b32587753648b53b78261887361758/0/2826951b5325407e9027a1c005d5ee93.623d2dc65245bf6360c79a184e527705
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/0eceec2ec0e6e2d732f2b7a028fee2c7/0/8f85f60216254247aa2c7c66e29b4d76.9695332dd4db41f2f4b98a5da0966eed
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/b688d235f38a437c0454e1b20a9e9e5e/0/36bc175dd8914291a915ea54ab0170d0.394b35ad4f3db41e06aae8d768837098
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/436271a35b6a313ac5fa69db4795e73b/0/d9a7948e713b4053b337b8002846a26d.18ff59ef5cd0d918247f1419625020ab
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES_INDEX_1/436271a35b6a313ac5fa69db4795e73b/0/36cc3329b3f14b429d762c9d6b87dc5f.18ff59ef5cd0d918247f1419625020ab
ERROR: Found lingering reference file hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/f5b32587753648b53b78261887361758/0/fb8679074112418d9ca9db4b7e506e37.623d2dc65245bf6360c79a184e527705
2017-12-02 00:32:57,078 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hbase Fsck connecting to ZooKeeper ensemble=master-hdp202.lan:2181,master-hdp201.lan:2181,master-hdp203.lan:2181
2017-12-02 00:32:57,078 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=master-hdp202.lan:2181,master-hdp201.lan:2181,master-hdp203.lan:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@1c12f3ee
2017-12-02 00:32:57,157 INFO [main-SendThread(master-hdp202.lan:2181)] zookeeper.ClientCnxn: Opening socket connection to server master-hdp202.lan/10.14.0.102:2181. Will not attempt to authenticate using SASL (unknown error)
2017-12-02 00:32:57,158 INFO [main-SendThread(master-hdp202.lan:2181)] zookeeper.ClientCnxn: Socket connection established to master-hdp202.lan/10.14.0.102:2181, initiating session
2017-12-02 00:32:57,159 INFO [main-SendThread(master-hdp202.lan:2181)] zookeeper.ClientCnxn: Session establishment complete on server master-hdp202.lan/10.14.0.102:2181, sessionid = 0x25aae5ddf40428e, negotiated timeout = 90000
2017-12-02 00:32:57,190 INFO [main] zookeeper.ZooKeeper: Session: 0x25aae5ddf40428e closed
2017-12-02 00:32:57,190 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-12-02 00:32:57,190 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hbase Fsck connecting to ZooKeeper ensemble=master-hdp202.lan:2181,master-hdp201.lan:2181,master-hdp203.lan:2181
2017-12-02 00:32:57,190 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=master-hdp202.lan:2181,master-hdp201.lan:2181,master-hdp203.lan:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@47c64cfe
2017-12-02 00:32:57,192 INFO [main-SendThread(master-hdp203.lan:2181)] zookeeper.ClientCnxn: Opening socket connection to server master-hdp203.lan/10.14.0.103:2181. Will not attempt to authenticate using SASL (unknown error)
2017-12-02 00:32:57,193 INFO [main-SendThread(master-hdp203.lan:2181)] zookeeper.ClientCnxn: Socket connection established to master-hdp203.lan/10.14.0.103:2181, initiating session
2017-12-02 00:32:57,196 INFO [main-SendThread(master-hdp203.lan:2181)] zookeeper.ClientCnxn: Session establishment complete on server master-hdp203.lan/10.14.0.103:2181, sessionid = 0x35aae5dde234249, negotiated timeout = 90000
2017-12-02 00:32:57,206 INFO [main] zookeeper.ZooKeeper: Session: 0x35aae5dde234249 closed
2017-12-02 00:32:57,206 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-12-02 00:32:57,516 INFO [main] util.HBaseFsck: Finishing hbck
Summary:
Table SR_ACTIVITIES is okay.
Number of regions: 445
Deployed on: hbase-hdp201.lan,16020,1512001916143 hbase-hdp202.lan,16020,1512002519632 hbase-hdp203.lan,16020,1512001180484 hbase-hdp204.lan,16020,1512001283591 hbase-hdp205.lan,16020,1512001554528 hbase-hdp206.lan,16020,1512001442898
Table hbase:meta is okay.
Number of regions: 1
Deployed on: hbase-hdp206.lan,16020,1512001442898
15 inconsistencies detected.
Status: INCONSISTENT
2017-12-02 00:32:57,516 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2017-12-02 00:32:57,516 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15ba5e122b09ebb
2017-12-02 00:32:57,517 INFO [main] zookeeper.ZooKeeper: Session: 0x15ba5e122b09ebb closed
2017-12-02 00:32:57,517 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
I have a snapshot from the morning before the crash, but when I try simply to copy it to another HDFS, I get a FileNotFoundException:
2017-11-30 13:19:54,438 INFO [main] mapreduce.Job: Task Id : attempt_1493215799486_47105_m_000006_2, Status : FAILED
Error: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/623d2dc65245bf6360c79a184e527705/0/fb8679074112418d9ca9db4b7e506e37, hdfs://pcluster/apps/hbase/data/.tmp/data/default/SR_ACTIVITIES/623d2dc65245bf6360c79a184e527705/0/fb8679074112418d9ca9db4b7e506e37, hdfs://pcluster/apps/hbase/data/mobdir/data/default/SR_ACTIVITIES/623d2dc65245bf6360c79a184e527705/0/fb8679074112418d9ca9db4b7e506e37, hdfs://pcluster/apps/hbase/data/archive/data/default/SR_ACTIVITIES/623d2dc65245bf6360c79a184e527705/0/fb8679074112418d9ca9db4b7e506e37]
at org.apache.hadoop.hbase.io.FileLink.getFileStatus(FileLink.java:390)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.getSourceFileStatus(ExportSnapshot.java:472)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:255)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:197)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:123)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
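For context, a minimal sketch, built only from the four paths listed in the exception above, of the locations an HFileLink probes for the underlying HFile during ExportSnapshot. The table, region, and file names are copied from the error message for illustration; the find command below shows the file is in none of these places.

# Candidate locations probed for the HFile, as listed in the exception above.
hbase_root = "hdfs://pcluster/apps/hbase/data"
table = "SR_ACTIVITIES"
region = "623d2dc65245bf6360c79a184e527705"
family = "0"
hfile = "fb8679074112418d9ca9db4b7e506e37"

candidate_locations = [
    f"{hbase_root}/data/default/{table}/{region}/{family}/{hfile}",     # live table data
    f"{hbase_root}/.tmp/data/default/{table}/{region}/{family}/{hfile}",    # temp
    f"{hbase_root}/mobdir/data/default/{table}/{region}/{family}/{hfile}",  # MOB store
    f"{hbase_root}/archive/data/default/{table}/{region}/{family}/{hfile}", # archive
]
for path in candidate_locations:
    print(path)
# The export only fails with FileNotFoundException when the HFile is in none
# of these places -- e.g. when it now sits under a different region directory,
# as the find below shows.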
However, I am able to find that file, just under another region:
[hdfs@hbase-hdp201 ~]$ hdfs dfs -find /apps/hbase -name fb8679074112418d9ca9db4b7e506e37 -print
/apps/hbase/data/data/default/SR_ACTIVITIES/f5b32587753648b53b78261887361758/0/fb8679074112418d9ca9db4b7e506e37
[hdfs@hbase-hdp201 ~]$
I believe this is somehow related to region splitting, and that manually moving those files or using hbck could help, but can you point me in the right and safe direction? Thanks in advance! Robert
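As a side note on the naming, here is a minimal sketch, using one of the lingering reference paths reported by hbck above, of how a split reference file is put together: the name is <hfile>.<encoded-name-of-parent-region>, so the part after the last dot identifies the parent region the real HFile came from. This is purely illustrative, not a repair tool.

# Decomposing one of the "lingering reference file" paths from the hbck output.
ref_path = ("hdfs://pcluster/apps/hbase/data/data/default/SR_ACTIVITIES/"
            "3bb33b5f4fba26a9f56dec1aedd402c9/0/"
            "e59e308a41194bb5b8299c84b0818dca.d143c81eb14ac29cd4e92681fde8cd7a")

parts = ref_path.split("/")
daughter_region = parts[-3]                      # region holding the reference
hfile, parent_region = parts[-1].rsplit(".", 1)  # <hfile>.<parent-region>

print("daughter region holding the reference:", daughter_region)
print("referenced HFile:", hfile)
print("parent region it points to:", parent_region)
# hbck flags the reference as "lingering" when the HFile it points to in the
# parent region can no longer be found (e.g. already compacted away or moved).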
Labels:
- Apache Hadoop
- Apache HBase