Initially, Spark reads 2000 messages per partition into the RDD, but after some time it starts reading only 800 records, which I think is minRate/2, and then it stays static. In the logs, it always prints 1600 as the new rate:
2017-01-20 14:55:14 TRACE PIDRateEstimator:67 - New rate = 1600.0
Given my scenario, I have a few questions:
Is spark.streaming.backpressure.pid.minRate per partition, or the total number of messages to be read per batch?
Why is it reading 800 messages instead of the 1600 defined in spark.streaming.backpressure.pid.minRate?
Are there any suggested parameters that reduce the input rate when processing takes long and increase it back toward maxRatePerPartition when processing is fast? In my example, the input rate started at 2000; when a batch took around 20 seconds on average, the rate was reduced to 800, but when those 800 messages were then processed in 3-4 seconds, the rate was not increased back to 1600 or more. This wastes time and lowers throughput.
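For reference, this is a minimal sketch of how these backpressure settings can be passed to spark-submit; the specific values (and the jar name) are assumptions based on the numbers above, not my exact command:

```shell
# Sketch only: values chosen to match the numbers discussed above.
spark-submit \
  --conf spark.streaming.backpressure.enabled=true \
  --conf spark.streaming.backpressure.pid.minRate=1600 \
  --conf spark.streaming.backpressure.initialRate=2000 \
  --conf spark.streaming.kafka.maxRatePerPartition=2000 \
  my-streaming-app.jar   # hypothetical application jar
```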