
I have configured a JMeter test with one thread group containing 40 threads, a ramp-up period of 60 seconds, and a scheduled duration of 10 minutes. It consists of a single HTTP sampler.

Upon running this, the throughput I get is 52/min, which means the time per request was under 1.20 seconds (60 s / 52 ≈ 1.15 s).

  1. Now, if I add a Constant Throughput Timer of 25/min (calculated across all active threads) to the thread group, then upon completing the test I get a final throughput of 30/min and an average elapsed time of 5 seconds. Should it not have been 2 seconds, since the throughput is 30/min? Why has the average elapsed time increased when I have reduced the throughput (see the arithmetic sketch after this list)?

  2. When the test is about to end, the elapsed time for the last few requests shoots up to about 15000 milliseconds (whereas the usual average elapsed time is under 5000 milliseconds). Why is that?
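
For reference, here is the arithmetic behind the figures above as a small illustrative Python sketch. The numbers are copied from the scenario; the interpretation of throughput as requests completed per minute across all threads is an assumption:

```python
# Illustrative arithmetic only -- numbers copied from the scenario above.
# Assumes "throughput" means requests completed per minute across all threads.

def interval_between_requests(throughput_per_min: float) -> float:
    """Average gap between consecutive requests, in seconds."""
    return 60.0 / throughput_per_min

# Baseline run: 52 requests/min -> ~1.15 s between requests.
print(f"baseline interval: {interval_between_requests(52):.2f} s")

# With the Constant Throughput Timer: 30 requests/min -> 2.0 s between
# requests. Note this is the *spacing* between consecutive requests, not
# the time a single request takes: an average elapsed time of 5 s can
# coexist with a 2 s spacing when many threads wait on the server at once.
print(f"timer interval:    {interval_between_requests(30):.2f} s")
```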


1 Answer


It sounds like a memory leak: given that the application is under the same load from the 1st through the 10th minute of the test, the response time should remain more or less constant over that time frame.
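
One way to confirm whether response time really degrades over the run is to bucket your results file by minute. A minimal sketch, assuming the default CSV `.jtl` output with `timeStamp` (epoch milliseconds) and `elapsed` (milliseconds) columns, and an example file name of `results.jtl`:

```python
import csv
from collections import defaultdict

# Average elapsed time per minute of the test, from a JMeter CSV results
# file. Assumes the default JTL columns: "timeStamp" is epoch milliseconds,
# "elapsed" is the sample time in milliseconds. The file name is an
# example; substitute your own results file.
buckets = defaultdict(list)
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        ts_ms = int(row["timeStamp"])
        buckets[ts_ms // 60000].append(int(row["elapsed"]))

start = min(buckets)
for minute in sorted(buckets):
    samples = buckets[minute]
    avg = sum(samples) / len(samples)
    print(f"minute {minute - start + 1:2d}: {len(samples):3d} samples, "
          f"avg elapsed {avg:7.1f} ms")
```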

If the response time grows, it most probably means the application is overloaded: requests are being queued up, the system has started swapping to the page file, or garbage collection is kicking in.
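
To illustrate the queueing effect (a toy model only, not a measurement of your system): if requests arrive faster than the server can process them, the backlog grows and each successive request's elapsed time climbs, which is consistent with the 15-second spikes near the end of the run. The arrival and service times below are assumed for illustration:

```python
# Toy single-server queue: arrivals every 2.0 s (30/min), but the server
# needs 2.5 s per request (capacity 24/min). The backlog grows steadily,
# so later requests spend longer and longer waiting plus being served.
arrival_interval = 2.0   # seconds between requests (assumed)
service_time = 2.5       # seconds the server needs per request (assumed)

server_free_at = 0.0
for i in range(10):
    arrival = i * arrival_interval
    start = max(arrival, server_free_at)   # wait if the server is busy
    server_free_at = start + service_time
    elapsed = server_free_at - arrival     # queue wait + service time
    print(f"request {i + 1:2d}: elapsed {elapsed:4.1f} s")
```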

I would recommend setting up monitoring of the baseline health metrics of your application, such as CPU, RAM, network, disk, and swap usage, as this might explain the described behaviour. It can be done using the JMeter PerfMon Plugin.
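
If you cannot install the PerfMon server agent on the target host, a rough substitute is to sample the same metrics yourself. A minimal sketch using the third-party psutil library; the 5-second interval and the output file name are arbitrary choices:

```python
import time
import psutil  # pip install psutil

# Append a line of host health metrics every 5 seconds while the load
# test runs. Stop with Ctrl+C. Interval and file name are arbitrary.
with open("health.csv", "a") as out:
    out.write("epoch,cpu_pct,ram_pct,swap_pct,disk_read_mb,net_sent_mb\n")
    while True:
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        out.write(",".join(str(v) for v in (
            int(time.time()),
            psutil.cpu_percent(interval=None),
            psutil.virtual_memory().percent,
            psutil.swap_memory().percent,
            disk.read_bytes // 1_000_000,
            net.bytes_sent // 1_000_000,
        )) + "\n")
        out.flush()
        time.sleep(5)
```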

It would also be a good idea to monitor application-specific metrics, or even to run the application under a profiling tool during the load test; this way you will find the slowest and most resource-consuming part(s) of your application.

And last but not least, ensure that JMeter itself has enough headroom to operate (using the same approach as above): if you don't follow JMeter Best Practices, you may run into a situation where JMeter is not capable of sending requests fast enough, causing false-negative results.
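
A quick sanity check for that last point is to compute the throughput JMeter actually achieved and compare it against the target. A minimal sketch, again assuming the default CSV `.jtl` columns, an example file name of `results.jtl`, and an example target of 25 requests/min:

```python
import csv

TARGET_PER_MIN = 25  # the Constant Throughput Timer setting (example)

# Achieved throughput = samples / wall-clock duration of the test.
timestamps = []
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        timestamps.append(int(row["timeStamp"]))

duration_min = (max(timestamps) - min(timestamps)) / 60000
achieved = len(timestamps) / duration_min
print(f"achieved {achieved:.1f}/min vs target {TARGET_PER_MIN}/min")
if achieved < TARGET_PER_MIN * 0.95:
    # Could be JMeter running out of headroom, or the system under test
    # failing to keep up -- the host metrics above help tell them apart.
    print("target throughput was not reached")
```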