I'm seeing strange behaviour from the DetectDuplicate processor.
I want to discard duplicates in a simple flow, using a hash value as the cache entry identifier.
I'm using Couchbase as the distributed cache server.
All the hash identifiers are correctly written to the Couchbase bucket, and the first messages (while the bucket is still empty) are correctly identified as non-duplicates.
However, after a few messages, the processor starts flagging every message as a duplicate, even though the hash value is not present in the bucket at the moment the processor receives the message. In the end, the hash value is stored in the bucket anyway.
It looks like the processor is storing the hash in the bucket before checking for duplicates, so that every message is considered a duplicate.
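To make the suspected ordering problem concrete, here is a minimal, self-contained Java sketch. This is not NiFi's or Couchbase's actual code: the class, method names, and the in-memory map standing in for the Couchbase bucket are all illustrative assumptions. It contrasts the atomic check-and-store a duplicate detector needs with the store-then-check ordering I suspect is happening:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DuplicateCheckSketch {

    // Hypothetical cache; stands in for the distributed Couchbase bucket.
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    // Expected semantics: atomically store the hash only if absent and
    // report whether it was already there. First arrival -> not a duplicate.
    boolean isDuplicateAtomic(String hash, String flowFileId) {
        String previous = cache.putIfAbsent(hash, flowFileId);
        return previous != null;
    }

    // Suspected faulty ordering: store first, then check. The check always
    // finds the entry just written, so every message looks like a duplicate.
    boolean isDuplicateStoreThenCheck(String hash, String flowFileId) {
        cache.put(hash, flowFileId);
        return cache.containsKey(hash); // always true, even on first arrival
    }

    public static void main(String[] args) {
        DuplicateCheckSketch sketch = new DuplicateCheckSketch();
        System.out.println(sketch.isDuplicateAtomic("abc123", "ff-1"));         // false: first time seen
        System.out.println(sketch.isDuplicateAtomic("abc123", "ff-2"));         // true: genuine duplicate
        System.out.println(sketch.isDuplicateStoreThenCheck("def456", "ff-3")); // true despite first arrival
    }
}
```

The second method reproduces exactly the symptom described above: the hash ends up in the bucket either way, but the duplicate check never returns false after the write has happened.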