A customer is having an issue with an ActiveMQ deployment that is blocking their production roll-out. The problematic deployment consists of:
- A producer application that reads many small messages from a large file and sends them to a queue. Each message is 2-3 KB, and a single file yields more than 100,000 messages.
- An ActiveMQ broker hosting the queue that receives those messages. The queue stores messages persistently, and the broker is configured to use an amqPersistenceAdapter.
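For reference, an amqPersistenceAdapter is declared in conf/activemq.xml roughly as follows. This is a sketch, not the customer's actual configuration; the directory and syncOnWrite values are illustrative (syncOnWrite controls whether each journal write is forced to disk, which is relevant when broker threads are busy writing):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" dataDirectory="${activemq.data}">
  <persistenceAdapter>
    <!-- illustrative values; syncOnWrite="true" forces a disk sync per journal write -->
    <amqPersistenceAdapter directory="${activemq.data}" syncOnWrite="true"/>
  </persistenceAdapter>
</broker>
```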
- A client that is a Camel route inside a WebSphere container. The route performs business logic on each message received from the queue; processing a message takes roughly 1 second. The route uses Camel transactions to ensure complete processing of each message and consumes from the queue in transacted mode, one message per transaction.
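A transacted Camel consumer of this shape is typically wired up along these lines in Spring XML (a sketch; the queue name and processor bean are assumptions, and the activemq component is presumed to be configured with a JMS transaction manager):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- hypothetical queue name -->
    <from uri="activemq:queue:INPUT.QUEUE"/>
    <!-- each message is processed inside its own JMS transaction -->
    <transacted/>
    <!-- hypothetical bean implementing the business logic -->
    <process ref="businessLogicProcessor"/>
  </route>
</camelContext>
```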
To process large files within a reasonable time frame, the consumer is configured with 80 threads. Initially, the threads used a connection pool with maxConnections=10 and maximumActive=120. The customer reports that this configuration achieved an overall processing rate of about 60 messages per second, but messages started getting stuck in the pooled sessions after roughly 70,000 messages had been processed.
I advised them to reduce the pool to maxConnections=1 and maximumActive=80 (one session per consumer thread) and to set the prefetch limit to 1. That resolved the stuck-message problem but dropped the processing rate to about 13 messages per second. Thread dumps on the client show that most consumer threads are waiting for the broker to commit their transactions, while broker threads appear busy writing to disk. When they disable transactions, the processing rate recovers, but they cannot run without transactions in production.
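The revised client settings can be expressed as Spring bean definitions roughly like this. This is a sketch under assumptions: the broker URL is invented, the prefetch limit of 1 is applied via the connection URL, and the maximumActive property name follows the customer's setting (newer activemq-pool releases rename it to maximumActiveSessionPerConnection):

```xml
<!-- illustrative broker URL; queuePrefetch=1 sets the prefetch limit per consumer -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL"
            value="tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1"/>
</bean>

<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
  <property name="connectionFactory" ref="connectionFactory"/>
  <!-- one connection shared by all consumer threads -->
  <property name="maxConnections" value="1"/>
  <!-- one pooled session per consumer thread (80 threads) -->
  <property name="maximumActive" value="80"/>
</bean>
```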
Is it possible to tune the broker or client configuration to speed up transaction commits?