
ConsumeAMQP performance issue - less than 50 msgs /sec


I am testing the ConsumeAMQP processor. It could not process more than 50 msgs/sec. I tried increasing the Concurrent Tasks from 1 to 10, the Java Xmx from 24m to 1000m, and nifi.bored.yield.duration from 10 millis to 2 millis, but none of it improves the rate at all. With this kind of performance, it is not usable in a production environment. Does anybody have any benchmark numbers? What else can I try? Thanks
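For reference, the exact settings I changed (assuming the default conf/ layout; the values are just what I tested, not recommendations):

```properties
# conf/bootstrap.conf - JVM heap for NiFi
java.arg.2=-Xms1000m
java.arg.3=-Xmx1000m

# conf/nifi.properties - how long an idle processor yields before being scheduled again
nifi.bored.yield.duration=2 millis
```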


Re: ConsumeAMQP performance issue - less than 50 msgs /sec

Hi @lightsail pro,

Could you give more details about your setup? What is the AMQP broker implementation used? What is the size of your messages?

I just installed RabbitMQ on my laptop with no specific configuration and created a test queue. I created a basic workflow to make a quick benchmark. Here is a screenshot of the workflow:


Message Size | PublishAMQP processor rate | ConsumeAMQP processor rate
20 B         | 5700+ messages/second      | 2700+ messages/second
100 B        | 5000+ messages/second      | 2000+ messages/second
1 KB         | 5000+ messages/second      | 2000+ messages/second
1 MB         | 150+ messages/second       | 130+ messages/second
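Converting the consume column of the table above into effective bandwidth makes the pattern clearer: for small messages the rate barely moves between 20 B and 1 KB (per-message overhead dominates), while at 1 MB the rate is limited by raw throughput. A quick sketch of that arithmetic:

```python
# Effective ConsumeAMQP bandwidth implied by the table above:
# (label, message size in bytes, consume rate in messages/second).
rows = [
    ("20B", 20, 2700),
    ("100B", 100, 2000),
    ("1KB", 1_000, 2000),
    ("1MB", 1_000_000, 130),
]

for label, size, rate in rows:
    mb_per_s = size * rate / 1_000_000
    print(f"{label}: {mb_per_s:g} MB/s")
```

So at 1 KB the processor moves only ~2 MB/s despite handling 2000 msgs/sec, while at 1 MB it moves ~130 MB/s: small messages are bound by per-message cost, not bandwidth.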

An example of the rates I get for 1KB messages (publish = yellow, consume = red):


As a side note, when I stop the publish part of my workflow, the consume rate increases since there is less pressure on the queue on the RabbitMQ side.

It is important to note that when dealing with large messages, such as 1 MB messages, you need to understand how the messages are stored on the broker's side (memory? disk?), and to look at the acknowledgment mechanism, which adds overhead.

In any case, unless you are dealing with large messages, the rate you are reporting is strange.

Hope this helps.

Re: ConsumeAMQP performance issue - less than 50 msgs /sec


@Pierre Villard

We have the same performance issue: one processor reaches its limit at around 200 msgs/second, whether with 2 or 10 Concurrent Tasks.

If we add more processors, the consume rate scales roughly linearly: with 2 processors we get ~400 msgs/second, and with 5 processors ~900 msgs/second.

Each message is 2 KB.

# JVM memory settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g

We have 3 nodes in the cluster (8 CPUs and 16 GB memory each).

Is there something wrong with this processor?

Re: ConsumeAMQP performance issue - less than 50 msgs /sec

I have RabbitMQ 3.6.1 on AWS with NiFi (1.0) running locally (Windows). I modified ConsumeAMQP to use the TLS 1.2 protocol directly for authentication; our server does not work with the default SSL context service for whatever reason. For the test, I pre-loaded the RabbitMQ queue with about 50 thousand messages (avg 2.5 KB each), then started ConsumeAMQP + LogAttribute and watched the message consume rate from the RabbitMQ web interface. With Flume (Rabbit to Kafka), I got about 400 msgs/sec. With NiFi, I got about 44 msgs/sec. It seems that the code is using the basicGet API instead of basicConsume. I will do more tests.

Re: ConsumeAMQP performance issue - less than 50 msgs /sec

That is still unreasonably slow. I've also just re-tested locally and am getting results similar to Pierre's. So please let us know what else you may dig up.

Yes, that is correct, we are using 'basicGet', which is a 'poll' model, as opposed to 'basicConsume', which is a 'push' model. There are obviously pros and cons to both. For example, polling is essentially an on-demand model where one can control the rate and timing of polling, while 'push' passes that responsibility to the consumer, which could potentially be overwhelmed when the incoming message rate is very high. That said, would you mind raising a JIRA to request an AMQP processor that relies on 'basicConsume'?
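A back-of-envelope model also explains the numbers in this thread: with a polling model, each message costs (at least) one broker round trip, so a single consuming thread tops out near 1/RTT messages per second. The RTT values below are assumptions for illustration, not measurements:

```python
# With basicGet, each message costs one network round trip to the broker,
# so one consumer thread is bounded by roughly 1/RTT messages per second.
# RTT values below are assumed, not measured.

def max_poll_rate(rtt_seconds: float) -> float:
    """Upper bound on msgs/sec for one thread doing one basicGet per message."""
    return 1.0 / rtt_seconds

wan_rtt = 0.020        # ~20 ms: local NiFi polling a broker on AWS (assumed)
loopback_rtt = 0.0004  # ~0.4 ms: broker on the same machine (assumed)

print(f"WAN:      ~{max_poll_rate(wan_rtt):.0f} msgs/sec")
print(f"loopback: ~{max_poll_rate(loopback_rtt):.0f} msgs/sec")
```

Under those assumed RTTs this gives ~50 msgs/sec over a WAN (close to the 44 msgs/sec reported against AWS) and a few thousand msgs/sec over loopback (the ballpark of my local test), which is why a push model such as 'basicConsume' matters most when the broker is remote.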



Re: ConsumeAMQP performance issue - less than 50 msgs /sec


FYI, this has not yet been picked up for development.
With NiFi 1.7.0 being baselined shortly, it seems we'll have to wait longer for this processor to be created.

If you'd like to see it created, I'd appreciate you up-voting the JIRA issue.