Member since: 02-18-2016
Posts: 141
Kudos Received: 19
Solutions: 18
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5089 | 12-18-2019 07:44 PM |
| | 5119 | 12-15-2019 07:40 PM |
| | 1802 | 12-03-2019 06:29 AM |
| | 1821 | 12-02-2019 06:47 AM |
| | 5787 | 11-28-2019 02:06 AM |
08-27-2022
03:01 PM
Hi Team, we are using JMeter to submit jobs (1300/hr) to HBase/Phoenix on HDP 3.1.4 with Phoenix 5.0. Jobs start failing with the error below:

    2022-08-25 16:21:44,785 INFO org.apache.phoenix.iterate.BaseResultIterators: Failed to execute task during cancel
    java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.exceptions.ScannerResetException: Scanner is closed on the server-side
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3468)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    Caused by: org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of range for Get on HRegion OBST:DOCUMENT_METADATA,\x0C\x00\x00\x00,1659594973530.146ed04497483dae508d10d1e2676a12., startKey='\x0C\x00\x00\x00', getEndKey()='\x0CADELMWSQRP\x004bcdbe31987c05d9e88cba377df31f3bbaae274d7df670ed26690fb021c90f5b\x00PERSISTENT', row='\x0CADELSRD\x009bb7104f2f156cec8ecb0e53f95b72affa43969125732ab898c96282356999f7\x00PERSISTENT'
        at org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5713)
        at org.apache.hadoop.hbase.regionserver.HRegion.prepareGet(HRegion.java:7297)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7290)
        at org.apache.phoenix.util.IndexUtil.wrapResultUsingOffset(IndexUtil.java:514)
        at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:197)
        at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
        at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:274)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3136)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3385)
        ... 5 more
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
        at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1352)
        at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1239)
        at org.apache.phoenix.iterate.MergeSortResultIterator.getMinHeap(MergeSortResultIterator.java:72)
        at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:93)
        at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:58)
        at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
        at org.apache.phoenix.iterate.LimitingResultIterator.next(LimitingResultIterator.java:47)
        at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:805)
        at org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
        at org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
        at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:887)
        at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
        at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
        at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
        at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
        at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
        at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:539)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
        at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
        at org.apache.phoenix.shaded.org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
        at org.apache.phoenix.shaded.org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.exceptions.ScannerResetException: Scanner is closed on the server-side
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3468)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    Caused by: org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of range for Get on HRegion OBST:DOCUMENT_METADATA,\x0C\x00\x00\x00,1659594973530.146ed04497483dae508d10d1e2676a12., startKey='\x0C\x00\x00\x00', getEndKey()='\x0CADELMWSQRP\x004bcdbe31987c05d9e88cba377df31f3bbaae274d7df670ed26690fb021c90f5b\x00PERSISTENT', row='\x0CADELSRD\x009bb7104f2f156cec8ecb0e53f95b72affa43969125732ab898c96282356999f7\x00PERSISTENT'

At the same time we ran "select count(*)" with and without the index, and the two counts differ. NOTE: that output was taken from a test cluster where we were able to reproduce the issue, so the view name in the screenshot may differ.

For the "WrongRegionException: Requested row out of range for Get on HRegion" we suspect this Apache bug: https://issues.apache.org/jira/browse/PHOENIX-3828

For the "select count(*)" mismatch, we suspect we are hitting PHOENIX-6090 ("Local indexes get out of sync after changes for global consistent indexes"): https://issues.apache.org/jira/browse/PHOENIX-6090

Can someone help with debugging steps?
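For anyone wanting to reproduce the count comparison, here is a minimal sketch of how the two counts can be checked from sqlline. The sqlline path, ZooKeeper quorum, and table name are assumptions (the table name is inferred from the region name in the log; the actual view differs as noted above). Phoenix's NO_INDEX hint forces a scan of the data table instead of the index:

```sh
# Hedged sketch: compare data-table vs. index-served counts in Phoenix.
# Host, client path, and table name below are placeholders.
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181 <<'SQL'
-- bypass any index: count straight off the data table
SELECT /*+ NO_INDEX */ COUNT(*) FROM OBST.DOCUMENT_METADATA;
-- same query, letting the optimizer use the (local) index
SELECT COUNT(*) FROM OBST.DOCUMENT_METADATA;
SQL
```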
Labels:
- Apache Phoenix
12-18-2019
10:37 PM
Hi @Daggers, please feel free to select the best answer to close the thread if your questions have been answered. Thanks!
12-18-2019
07:44 PM
Hi @Daggers, you can write a simple script that uses the YARN REST API to fetch only completed applications (month- or day-wise) and copy only those applications' logs from HDFS to local disk; see the sketch below. Please check this link: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
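A minimal sketch of such a script, assuming the RM host/port, the time window, and the HDFS log paths (all placeholders to adjust for your cluster):

```sh
#!/bin/bash
# Hedged sketch: list apps that FINISHED in the last day via the RM REST API,
# then copy each app's aggregated logs from HDFS to local disk.
RM="http://resourcemanager-host:8088"          # placeholder host:port
BEGIN="$(( ($(date +%s) - 86400) * 1000 ))"    # finishedTimeBegin is in milliseconds
curl -s "$RM/ws/v1/cluster/apps?states=FINISHED&finishedTimeBegin=$BEGIN" |
python -c 'import sys, json
for a in json.load(sys.stdin)["apps"]["app"]:
    print(a["id"])' |
while read -r app; do
  # path layout follows yarn.nodemanager.remote-app-log-dir[-suffix]
  hdfs dfs -copyToLocal "/app-logs/*/logs-ifile/$app" /tmp/yarn-logs/
done
```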
12-15-2019
10:52 PM
@Daggers You can also look at the HDFS NFS Gateway, which lets you mount the HDFS filesystem on the local OS via NFS: https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
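Once the gateway is running, the mount itself looks roughly like this (gateway host and mount point are placeholders; options follow the linked Apache docs):

```sh
# Hedged sketch: mount HDFS via the NFS gateway.
mkdir -p /mnt/hdfs
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync nfs-gateway-host:/ /mnt/hdfs
ls /mnt/hdfs    # browse HDFS like a local filesystem
```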
12-15-2019
07:40 PM
1 Kudo
Hi @Daggers, I think you can try this:

1. The properties below decide the path for storing YARN logs in HDFS. Here is a sample from my cluster:

yarn.nodemanager.remote-app-log-dir = /app-logs
yarn.nodemanager.remote-app-log-dir-suffix = logs-ifile

2. You can run "hdfs dfs -copyToLocal" on that path, which copies all application logs to local disk, and then feed them to Splunk (see the sketch below).

Do you think that would work for you? Let me know if you have more questions.
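A minimal sketch of step 2, assuming a placeholder local destination that a Splunk forwarder would then monitor:

```sh
# Hedged sketch: copy all aggregated app logs to local disk for Splunk.
mkdir -p /var/local/yarn-app-logs
hdfs dfs -copyToLocal /app-logs /var/local/yarn-app-logs
```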
12-06-2019
01:11 AM
Hi @pdev, log in to the host and execute the command below. It skips all mount points and reports sizes only for the filesystems/directories that live directly under "/":

for a in /*; do mountpoint -q -- "$a" || du -s -h -x "$a"; done

You can then review and delete data accordingly (see the follow-up sketch below for drilling into a large directory).
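If one directory stands out, you can drill into it one level at a time; a sketch, with a placeholder path (assumes GNU du/sort):

```sh
# Hedged sketch: per-subdirectory sizes, largest last, staying on this filesystem (-x).
du -x -h --max-depth=1 /var | sort -h | tail
```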
12-03-2019
06:29 AM
Hi @Peruvian81, there is no such option in the Ambari UI. You can instead check the NameNode UI --> Datanodes tab and see whether the block counts are increasing.
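If you want something scriptable rather than the UI, the NameNode JMX endpoint exposes per-datanode details; a hedged sketch (host is a placeholder, and the HTTP port varies by version, e.g. 50070 on older HDP releases vs. 9870 on Hadoop 3):

```sh
# Hedged sketch: dump the LiveNodes JSON from the NameNode's NameNodeInfo bean,
# which includes per-datanode statistics you can poll over time.
curl -s 'http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo' |
python -c 'import sys, json
print(json.load(sys.stdin)["beans"][0]["LiveNodes"])'
```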
12-02-2019
06:47 AM
Hi @Peruvian81, once you add the new datanode to the cluster and replication starts, you should see messages like the ones below in the datanode logs. They indicate that the new node is both finalizing blocks being written and receiving blocks from source nodes as part of replication:

    DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1490)) - src: /<IPADDRESS>:45858, dest: /<IPADDRESS>:1019, bytes: 7526, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-646394656_1, offset: 0, srvID: 973c1ebc-7c88-4163-aea3-8c2e0f4f4975, blockid: BP-826310834-<IPADDRESS>-1480602348927:blk_1237811292_164146312, duration: 9130002
    datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-826310834-<IPADDRESS>-1480602348927:blk_1237811295_164146315 src: /<IPADDRESS>:36930 dest: /<IPADDRESS>:1019
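To watch for these on the new node as they happen, something like this could work (the log path is a placeholder for your install):

```sh
# Hedged sketch: follow the datanode log for incoming and finalized blocks.
tail -f /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log |
grep -E 'Receiving BP-|finalizeBlock'
```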
11-29-2019
01:38 AM
Hi @laplacesdemon, thank you for the response and the appreciation. I will be happy to contribute and share my experiences going forward. Thank you for accepting the answer.
11-28-2019
10:51 PM
@Manoj690 Can you remove the password from your previous comment, just to avoid any security exposure? Also, can you share the commands you executed previously?