Thank you very much, dbains, for the answer. You are right: if I change the value of kafka.consumer.group.id and restart the Flume agent, it will re-consume/re-read all messages from the Kafka topic.
(1) But this is inconvenient and may require manual intervention, which could break the automation process. Assume real-time processing in a system where events flow through Kafka => Flume => HDFS for archiving/audit purposes, and the system must prevent data loss and provide a reconciliation and recovery process. Suppose a periodically scheduled script detects data loss in the Flume => HDFS part of the chain. In that case the Flume agent has to be stopped, the group.id in the config file has to be changed, and the agent has to be started again so it re-consumes the messages under the new group.id. It seems there should be something easier and more effective in Flume. Or maybe there is and I'm just not aware of it; if so, please let me know.
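For reference, a minimal sketch of the Flume Kafka source properties involved (the agent name, source name, topic, and group id below are placeholders, not from my actual config). A brand-new group id has no committed offsets, so with auto.offset.reset set to earliest the consumer starts from the beginning of the topic:

```properties
# Hypothetical agent "a1" with Kafka source "r1" -- names for illustration only
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = localhost:9092
a1.sources.r1.kafka.topics = audit-events

# Changing this to a fresh value is the step that forces a full re-read,
# because the new consumer group has no committed offsets yet
a1.sources.r1.kafka.consumer.group.id = flume-audit-recovery-001

# With no committed offsets, "earliest" makes consumption start
# from the beginning of the topic
a1.sources.r1.kafka.consumer.auto.offset.reset = earliest
```

As a possible alternative (I have not verified this with Flume 1.8's bundled client): if the broker is Kafka 0.11 or newer, the kafka-consumer-groups tool has a --reset-offsets option that can rewind the existing group's offsets while the agent is stopped, which would avoid editing the config at all.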
(2) Could you please tell me in which section of the Flume 1.8 User Guide the statement "The property kafka.consumer.auto.offset.reset comes into picture when there is no initial offset in Kafka or if the current offset does not exist any more on the server" appears?