Member since: 10-16-2024
Posts: 11
Kudos Received: 2
Solutions: 1
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 1194 | 10-16-2024 06:28 PM
08-14-2025
02:37 AM
Hi everyone, I'm using Hadoop 3.1.1 and have encountered an issue: after the DataNode has been running for a few days, it eventually becomes unresponsive. When I inspected the threads of the DataNode process, I found that many of them are stuck in DataXceiver. Has anyone encountered this before, and are there any recommended solutions?

[root@dn-27 ~]# top -H -p 74042
top - 17:24:14 up 10 days, 4:05, 1 user, load average: 140.45, 114.30, 110.42
Threads: 792 total, 36 running, 756 sleeping, 0 stopped, 0 zombie
%Cpu(s): 54.7 us, 38.0 sy, 0.0 ni, 7.2 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 52732768+total, 22929336 free, 29385958+used, 21053875+buff/cache
KiB Swap: 0 total, 0 free, 0 used. 22594056+avail Mem

  PID USER PR NI  VIRT  RES   SHR S %CPU %MEM     TIME+ COMMAND
56353 hdfs 20  0 11.3g 8.0g 34484 R 99.9  1.6 103:13.94 DataXceiver for
72973 hdfs 20  0 11.3g 8.0g 34484 R 99.9  1.6  70:57.77 DataXceiver for
84061 hdfs 20  0 11.3g 8.0g 34484 R 99.9  1.6  60:03.79 DataXceiver for
11326 hdfs 20  0 11.3g 8.0g 34484 R 99.9  1.6  55:46.58 DataXceiver for
15519 hdfs 20  0 11.3g 8.0g 34484 R 99.9  1.6  31:54.12 DataXceiver for
65962 hdfs 20  0 11.3g 8.0g 34484 R 99.7  1.6  74:41.84 DataXceiver for
56313 hdfs 20  0 11.3g 8.0g 34484 R 99.3  1.6 103:09.39 DataXceiver for
11325 hdfs 20  0 11.3g 8.0g 34484 R 99.0  1.6  55:43.29 DataXceiver for
65919 hdfs 20  0 11.3g 8.0g 34484 R 98.7  1.6  74:40.23 DataXceiver for
20557 hdfs 20  0 11.3g 8.0g 34484 R 98.7  1.6  41:18.60 DataXceiver for
10529 hdfs 20  0 11.3g 8.0g 34484 R 98.3  1.6 150:28.54 DataXceiver for
42962 hdfs 20  0 11.3g 8.0g 34484 R 98.3  1.6 120:37.85 DataXceiver for
10488 hdfs 20  0 11.3g 8.0g 34484 R 98.0  1.6 150:26.11 DataXceiver for
11909 hdfs 20  0 11.3g 8.0g 34484 R 98.0  1.6 150:27.20 DataXceiver for
57550 hdfs 20  0 11.3g 8.0g 34484 R 98.0  1.6 142:06.13 DataXceiver for
10486 hdfs 20  0 11.3g 8.0g 34484 R 97.7  1.6 150:26.47 DataXceiver for
73028 hdfs 20  0 11.3g 8.0g 34484 R 97.7  1.6  60:37.69 DataXceiver for
11901 hdfs 20  0 11.3g 8.0g 34484 R 97.4  1.6 150:25.12 DataXceiver for
72941 hdfs 20  0 11.3g 8.0g 34484 R 97.0  1.6  70:55.71 DataXceiver for
10887 hdfs 20  0 11.3g 8.0g 34484 R 97.0  1.6  55:43.40 DataXceiver for
11360 hdfs 20  0 11.3g 8.0g 34484 R 97.0  1.6  55:43.28 DataXceiver for
10528 hdfs 20  0 11.3g 8.0g 34484 R 96.7  1.6 150:27.95 DataXceiver for
11902 hdfs 20  0 11.3g 8.0g 34484 R 96.4  1.6 150:24.02 DataXceiver for
20521 hdfs 20  0 11.3g 8.0g 34484 R 96.0  1.6  41:20.82 DataXceiver for
22369 hdfs 20  0 11.3g 8.0g 34484 R 95.4  1.6 146:25.16 DataXceiver for
10673 hdfs 20  0 11.3g 8.0g 34484 R 95.0  1.6  55:47.24 DataXceiver for
73198 hdfs 20  0 11.3g 8.0g 34484 R 94.7  1.6  60:36.41 DataXceiver for
24624 hdfs 20  0 11.3g 8.0g 34484 R 94.4  1.6 146:16.92 DataXceiver for
20524 hdfs 20  0 11.3g 8.0g 34484 R 94.4  1.6  41:21.80 DataXceiver for
15472 hdfs 20  0 11.3g 8.0g 34484 R 94.4  1.6  31:54.54 DataXceiver for
72974 hdfs 20  0 11.3g 8.0g 34484 R 93.0  1.6  70:59.92 DataXceiver for
42967 hdfs 20  0 11.3g 8.0g 34484 R 92.1  1.6 120:32.41 DataXceiver for
43053 hdfs 20  0 11.3g 8.0g 34484 R 89.7  1.6 118:03.47 DataXceiver for
49234 hdfs 20  0 11.3g 8.0g 34484 R 87.1  1.6  48:41.65 DataXceiver for
43055 hdfs 20  0 11.3g 8.0g 34484 R 85.8  1.6 117:03.03 DataXceiver for
49932 hdfs 20  0 11.3g 8.0g 34484 R 80.8  1.6  48:38.63 DataXceiver for
78139 hdfs 20  0 11.3g 8.0g 34484 S  1.0  1.6   0:37.71 org.apache.hado
80884 hdfs 20  0 11.3g 8.0g 34484 S  0.7  1.6   0:15.24 VolumeScannerTh
74120 hdfs 20  0 11.3g 8.0g 34484 S  0.3  1.6   0:09.30 jsvc

The relevant part of the jstack output is as follows:

"DataXceiver for client DFSClient_NONMAPREDUCE_-1324017693_1 at /172.18.0.27:34088 [Sending block BP-354740316-172.18.0.1-1707099547847:blk_2856827749_1783210107]" #278210 daemon prio=5 os_prio=0 tid=0x00007f54481a1000 nid=0x1757c runnable [0x00007f53df2f1000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        - locked <0x00000007aa9383d0> (a sun.nio.ch.Util$3)
        - locked <0x00000007aa9383c0> (a java.util.Collections$UnmodifiableSet)
        - locked <0x00000007aa938198> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        - locked <0x00000007af591cf8> (a java.io.BufferedInputStream)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:547)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:614)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
--
"DataXceiver for client DFSClient_NONMAPREDUCE_-892667432_1 at /172.18.0.17:57202 [Receiving block BP-354740316-172.18.0.1-1707099547847:blk_2856799086_1783181444]" #268862 daemon prio=5 os_prio=0 tid=0x00007f5448ec8000 nid=0x849a runnable [0x00007f53f3c7b000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        - locked <0x00000007aaaa2cc8> (a sun.nio.ch.Util$3)
        - locked <0x00000007aaaa2cb8> (a java.util.Collections$UnmodifiableSet)
        - locked <0x00000007aaaa2c70> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
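For anyone who wants to reproduce this kind of diagnosis: the nid field in each jstack thread header is the OS thread ID in hexadecimal, so the busy thread IDs from top -H can be matched to their stack traces. A sketch (56353 is the hottest thread and 74042 the DataNode PID from the output above; run as the hdfs user if jstack refuses to attach):

# Convert the decimal thread ID from `top -H` into the hex nid that jstack prints.
printf 'nid=0x%x\n' 56353      # prints: nid=0xdc21
# Pull that thread's stack out of a fresh thread dump of the DataNode process.
jstack 74042 | grep -A 25 'nid=0xdc21'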
Labels:
- HDFS
06-25-2025
01:22 AM
Thank you for your message. Here's the Maven coordinate that we're currently using:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.1.3.1.5.0-152</version>
</dependency>

However, the repository link it relies on points to the old HDP Maven repository, which is no longer accessible and returns a 404 Not Found error.
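To make the failure concrete, here is a quick way to probe the repository directly (a sketch; the URL is constructed from the standard Maven directory layout for the coordinate above against the public Hortonworks releases repo):

# Probe the public Hortonworks releases repo for the exact artifact
# (standard Maven layout: groupId as directories / artifactId / version / file).
curl -I https://repo.hortonworks.com/content/repositories/releases/org/apache/hadoop/hadoop-common/3.1.1.3.1.5.0-152/hadoop-common-3.1.1.3.1.5.0-152.pom
# In our environment this currently returns: HTTP/1.1 404 Not Found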
06-23-2025
05:14 PM
Hello, Thank you for your suggestion. I have checked under the hortonworks-repo, but unfortunately I wasn't able to find the required dependency there. I also noticed that the Maven repository links eventually redirect to Hortonworks' internal servers, which are currently inaccessible from my environment. Could you please advise if there's an alternative public repository or method to obtain the dependency? Thanks again for your help.
06-23-2025
05:13 PM
Hello, Thank you for your response. I have tried accessing the provided URL: http://nexus-private.hortonworks.com:8081/nexus/#browse/search/generic=keyword%3DHDP%203.1.5.0-152 Unfortunately, the site is not reachable from my environment. Could you please check if the link is still valid or if there is an alternative source? Appreciate your support.
06-22-2025
07:06 PM
Does anyone know where to find the Maven repository for HDP 3.1.5.0-152? I want to compile HDP 3.1.5.0-152, but I can't find the corresponding Maven repository. At https://repo.hortonworks.com/content/repositories/releases/, only dependencies for other HDP versions are available. Thanks in advance.
Labels:
- Hortonworks Data Platform (HDP)
03-30-2025
11:55 PM
Thanks for your detailed answer. I'll try the options you provided in a test environment. For now, since I'm sure the MOB HFile is missing, I've put an empty HFile at that path instead, and the exception no longer appears when I scan the data. By the way, I'm curious: why does setting hbase.mob.file.expired.period count as a preventive measure?
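For anyone who hits the same error, the stopgap described above boils down to a single command (a sketch; the path is the missing file from the stack trace in the post below, and a zero-byte placeholder only suppresses the FileNotFoundException, the MOB cell data itself is still lost):

# Stopgap only: create a zero-byte placeholder where the missing MOB HFile
# used to be so scans stop failing. The MOB cell data itself remains lost.
hdfs dfs -touchz hdfs://ha:8020/apps/hbase/data/archive/data/default/FPCA_ITEMS_TY_NEW/bf92b15900f190730a5482b53d350df0/cf/ab741ac0919480a47353778bda55d142202502239bf346dbbfc6475c8967734c2edfaaf4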
03-24-2025
08:55 PM
I am using HBase 2.1.6. The exception is as follows:

hbase(main):001:0> scan 'FPCA_ITEMS_TY_NEW',{STARTROW => '08', LIMIT => 2}
ROW                COLUMN+CELL

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: java.io.FileNotFoundException: File does not exist: hdfs://ha:8020/apps/hbase/data/archive/data/default/FPCA_ITEMS_TY_NEW/bf92b15900f190730a5482b53d350df0/cf/ab741ac0919480a47353778bda55d142202502239bf346dbbfc6475c8967734c2edfaaf4
        at org.apache.hadoop.hbase.regionserver.HMobStore.readCell(HMobStore.java:440)
        at org.apache.hadoop.hbase.regionserver.HMobStore.resolve(HMobStore.java:354)
        at org.apache.hadoop.hbase.regionserver.MobStoreScanner.next(MobStoreScanner.java:73)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6581)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6745)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6518)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3155)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3404)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42190)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://ha:8020/apps/hbase/data/archive/data/default/FPCA_ITEMS_TY_NEW/bf92b15900f190730a5482b53d350df0/cf/ab741ac0919480a47353778bda55d142202502239bf346dbbfc6475c8967734c2edfaaf4
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1581)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1574)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1589)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
        at org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:139)
        at org.apache.hadoop.hbase.regionserver.HStoreFile.<init>(HStoreFile.java:214)
        at org.apache.hadoop.hbase.mob.CachedMobFile.create(CachedMobFile.java:49)
        at org.apache.hadoop.hbase.mob.MobFileCache.openFile(MobFileCache.java:220)
        at org.apache.hadoop.hbase.regionserver.HMobStore.readCell(HMobStore.java:401)
        ... 13 more

I've checked the HDFS path, and the file really does not exist. How can I resolve this problem? And how does HBase determine which file path it should read? Thanks in advance.
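Until the root cause is known, the quickest verification is against HDFS itself. A sketch using standard hdfs dfs commands (the archive path comes from the stack trace above; the mobdir path assumes the default HBase MOB layout under the HBase root directory):

# Does the archived HFile the scan is looking for exist at all?
hdfs dfs -ls hdfs://ha:8020/apps/hbase/data/archive/data/default/FPCA_ITEMS_TY_NEW/bf92b15900f190730a5482b53d350df0/cf/

# MOB HFiles normally live under the mob directory before being archived;
# check whether the file was moved rather than deleted (assumed default layout).
hdfs dfs -ls -R hdfs://ha:8020/apps/hbase/data/mobdir/data/default/FPCA_ITEMS_TY_NEW | grep ab741ac0919480a47353778bda55d142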
Labels:
- Apache HBase
12-19-2024
06:44 PM
@Shelton Thank you for your reply. This information is very helpful.
11-25-2024
04:39 PM
1 Kudo
In Apache Spark, spark_shuffle and spark2_shuffle are auxiliary-service names related to Spark's shuffle operations, which can be configured to run inside the YARN NodeManager. But what is the difference between the two?
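For context on how the two appear in practice: on HDP-style clusters they are registered as two separate NodeManager auxiliary services that use the same shuffle-service class but load it from different Spark installations, so Spark 1.x and Spark 2.x executors can each talk to a matching external shuffle service. A sketch of the yarn-site.xml fragment (property names follow the HDP convention; the classpath values are illustrative and should be verified against your stack version):

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle,spark_shuffle,spark2_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
    <!-- Illustrative path: shuffle jar from the Spark 1.x install -->
    <name>yarn.nodemanager.aux-services.spark_shuffle.classpath</name>
    <value>/usr/hdp/current/spark-client/aux/*</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.spark2_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
    <!-- Illustrative path: shuffle jar from the Spark 2.x install -->
    <name>yarn.nodemanager.aux-services.spark2_shuffle.classpath</name>
    <value>/usr/hdp/current/spark2-client/aux/*</value>
</property>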
Labels:
- Apache Spark
- Apache YARN
10-16-2024
06:28 PM
1 Kudo
Hi everyone, Thank you all for your responses. I am using Spark 3, and I’ve discovered that the issue is due to the improper configuration of the spark_shuffle settings in the yarn-site.xml file. Thanks again!