Member since: 08-27-2021
Posts: 2
Kudos Received: 0
Solutions: 0
12-14-2021 08:07 AM
Hello Team,

We are facing Slow ReadProcessor warnings while pulling data from Kafka with Spark applications. After a few Slow ReadProcessor warnings, the applications fail. A partial log is attached; please let us know if you need further information. I am seeing these warnings frequently, and the application is also taking too long to complete. The warning message is below:

2021-12-13 03:25:00 WARN DFSClient:854 - Slow ReadProcessor read fields took 117390ms (threshold=30000ms); ack: seqno: 353 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 778712 flag: 0 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[10.108.0.18:1019,DS-ec5cff3e-e958-416e-9ad8-de319cfbc28a,DISK], DatanodeInfoWithStorage[10.108.0.106:1019,DS-61163e3d-59ef-4dfc-b194-7385cff86a7c,DISK], DatanodeInfoWithStorage[10.108.0.96:1019,DS-af490217-ef46-4d92-bd6e-78bda82c82dc,DISK]]

Thanks & Regards
Kallem
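For context: this warning comes from the HDFS client write pipeline. The DFSClient flags any DataNode pipeline acknowledgement that takes longer than dfs.client.slow.io.warning.threshold.ms (30000 ms by default, which matches the "threshold=30000ms" in the log). The root cause is usually a slow or overloaded DataNode disk or network, but the threshold itself can be raised from the Spark side through the spark.hadoop.* property mechanism. A minimal PySpark sketch, where the app name and the 120000 ms value are illustrative assumptions, not recommendations:

from pyspark.sql import SparkSession

# Sketch: raise the DFSClient slow-ack warning threshold so transient
# pipeline hiccups stop flooding the application logs.
# spark.hadoop.* prefixed keys are forwarded into the Hadoop
# Configuration used by the HDFS client inside the executors.
spark = (
    SparkSession.builder
    .appName("kafka-to-hdfs")  # hypothetical app name
    .config("spark.hadoop.dfs.client.slow.io.warning.threshold.ms", "120000")
    .getOrCreate()
)

Note that this only quiets the symptom: if reads genuinely take ~117 s against a 30 s threshold, the DataNodes listed in the log (10.108.0.18, 10.108.0.106, 10.108.0.96) should still be checked for disk or network saturation.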
Labels:
- Apache Kafka
- Apache Spark
10-15-2021 05:28 AM
Hello Team,

Can someone please help me? I am facing an OutOfMemoryError in my Spark jobs. The log excerpt, job configuration, and YARN queue capacity details are below.

21/10/11 17:22:53 INFO executor.Executor: Finished task 194.0 in stage 79.0 (TID 14855). 11767 bytes result sent to driver
21/10/11 17:23:34 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
21/10/11 17:23:34 ERROR executor.Executor: Exception in task 167.0 in stage 79.0 (TID 14825)
java.lang.OutOfMemoryError: Java heap space
21/10/11 17:23:34 INFO storage.DiskBlockManager: Shutdown hook called
21/10/11 17:23:34 INFO util.ShutdownHookManager: Shutdown hook called
21/10/11 17:23:34 INFO executor.Executor: Not reporting error to driver during JVM shutdown.
21/10/11 17:23:34 ERROR util.SparkUncaughtExceptionHandler: [Container in shutdown] Uncaught exception in thread Thread[Executor task la
java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.sql.catalyst.expressions.UnsafeRow.copy(UnsafeRow.java:502)
    at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.add(ExternalAppendOnlyUnsafeRowArray.scala:108)

Job configuration details:

conf = {
    "app_name": "CX360",
    "spark.yarn.queue": "CXMT",
    "spark.port.maxRetries": 500,
    "spark.driver.memoryOverhead": 4096,
    "spark.executor.memoryOverhead": "14g",
    "spark.driver.memory": "50g",
    "spark.driver.maxResultSize": 0,
    "spark.executor.memory": "50g",
    "spark.executor.instances": 2,
    "spark.executor.cores": 5,
    "spark.driver.cores": 5
}

I have tried different values but am still facing the issue. YARN queue capacity details:

Queue Name : CXMT
Queue State : running
Scheduling Info : Capacity: 8.0, MaximumCapacity: 8.0, CurrentCapacity: 0.8620696

Please do the needful as soon as possible. Thanks
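Two things stand out in the configuration above. First, spark.driver.memoryOverhead is given as a bare number (interpreted as MiB) while spark.executor.memoryOverhead uses the "14g" string form, so the two settings are in different unit conventions. Second, two 50g+14g executors plus a 50g driver may simply not fit in a queue capped at 8% of cluster capacity. A minimal sketch of a more consistent setup, assuming PySpark; all sizes here are placeholders chosen to illustrate the unit convention, not tuned recommendations:

from pyspark.sql import SparkSession

# Sketch: keep all memory settings in the same string-unit form and
# size containers to what the CXMT queue can actually grant.
# Every value below is an illustrative placeholder.
spark = (
    SparkSession.builder
    .appName("CX360")
    .config("spark.yarn.queue", "CXMT")
    .config("spark.driver.memory", "16g")
    .config("spark.driver.memoryOverhead", "2g")   # string form, matching the executor setting
    .config("spark.executor.memory", "16g")
    .config("spark.executor.memoryOverhead", "2g")
    .config("spark.executor.instances", "4")       # more, smaller executors
    .config("spark.executor.cores", "5")
    .getOrCreate()
)

Also worth noting: the stack trace (ExternalAppendOnlyUnsafeRowArray.add called from UnsafeRow.copy) indicates rows being buffered in executor memory, which is typical of window functions or sort-merge joins over skewed keys, so repartitioning on a better-distributed key can matter more than raw heap size.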
Labels:
- Apache Spark