0
votes

I executed a 1-second-latency load test using JMeter and an Nginx mock server. The load test was executed directly against the Nginx mock service.

JMeter load test details: Users: 250, Ramp-up (seconds): 125, Duration (seconds): 36000 (10 hrs)

Mock backend details: an Nginx server used as the mock service, with 1 second of latency added (`echo_sleep 1`)
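For reference, a minimal sketch of such a mock endpoint. This assumes Nginx is built with the third-party echo module (bundled with OpenResty); the port and location are illustrative:

```nginx
# Minimal mock endpoint with a fixed 1-second delay.
# Requires the ngx_http_echo module (e.g. an OpenResty build).
server {
    listen 8080;

    location /mock {
        echo_sleep 1;            # artificial 1 s backend latency
        echo "mock response";    # canned response body
    }
}
```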

Result: 249.4 TPS

But when I executed the same script with 0 seconds of backend latency, the result was 276939.2 TPS.

  1. Why does the TPS drop so much when the backend has 1 second of latency? (I used the default Nginx configuration.)
  2. How can I calculate the expected TPS for an n-second backend latency with the above JMeter script parameters (user count, ramp-up, duration, backend latency)?

1 Answer

0
votes

First of all, I believe you're using the wrong term. Looking into the JMeter Glossary:

Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.

With regard to explaining your observations: JMeter simply tries to execute requests as fast as it can (as fast as JMeter itself can send requests + time needed for the request to travel back and forth + application response time).

  1. When you add a 1-second artificial response time delay, each virtual user is capable of making only one request per second. That's why your throughput is more or less equal to the number of virtual users.
  2. When you remove this 1-second artificial response time delay, JMeter starts executing the sampler(s) at top speed, and this 276939.2 TPS is the maximum throughput you can reach with 250 virtual users.
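The arithmetic behind point 1 is Little's Law for a closed system: throughput ≈ active users / (response time + think time). A quick sketch (the function name is mine, not a JMeter API):

```python
def expected_tps(users: int, backend_latency_s: float, think_time_s: float = 0.0) -> float:
    """Upper bound on requests/sec for a fixed pool of virtual users
    (Little's Law, closed model). With near-zero latency the real ceiling
    is instead set by JMeter's own CPU/network limits."""
    return users / (backend_latency_s + think_time_s)

print(expected_tps(250, 1.0))       # 250 users, 1 s latency -> 250.0 TPS ceiling
print(expected_tps(250, 1.0, 1.0))  # add 1 s think time     -> 125.0 TPS ceiling
```

This matches your measurement: 250 users with 1 second of backend latency gives a ceiling of 250 TPS, and you observed 249.4.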

In general you should take the following approach:

  1. Make sure to increase the load gradually as only this way you will be able to correlate increasing load with increasing response time, decreasing throughput, etc. Also you will be able to identify saturation and breaking points.
  2. Make sure that your load test accurately represents your application's real-life usage, otherwise it doesn't make a lot of sense.
  3. Make sure that JMeter has enough headroom to operate in terms of CPU, RAM, etc., as if JMeter lacks resources it will not be able to send requests fast enough. You can automatically check JMeter engine health using, e.g., the PerfMon Plugin. Also ensure that you're following JMeter Best Practices.