2 votes

My query is: when I finish my performance testing and get the result file, I can see a difference between the JMeter response time and the server response time. I verified the server response time by checking the server logs, and I am not writing any extra elements to the result file either. Can I get an explanation of why the response time shown by JMeter is always higher than the actual server response time?

1 Answer

2 votes

Have you thought about the network? According to the JMeter Glossary:

Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.

Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.

Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
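You can see this breakdown in your own result file. Here is a minimal sketch, assuming a CSV-format .jtl that includes the `elapsed`, `Latency`, and `Connect` columns (the sample rows and values below are invented for illustration; a real .jtl has more columns, depending on your configuration):

```python
import csv
import io

# Made-up rows in the shape of a JMeter CSV result file.
sample_jtl = """timeStamp,elapsed,label,Latency,Connect
1700000000000,350,Login,210,80
1700000000400,120,Search,95,0
"""

for row in csv.DictReader(io.StringIO(sample_jtl)):
    elapsed = int(row["elapsed"])   # full round trip, first byte sent to last byte received
    latency = int(row["Latency"])   # time until the first byte of the response
    connect = int(row["Connect"])   # connection establishment, incl. SSL handshake
    # Time spent downloading the rest of the response after the first byte:
    download = elapsed - latency
    print(f'{row["label"]}: elapsed={elapsed} ms, connect={connect} ms, '
          f'time-to-first-byte={latency} ms, download={download} ms')
```

None of these client-side numbers can be smaller than the server's own processing time, because each one wraps it.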

So my expectation is that the server measures only the time required to process the request and respond, while JMeter measures the whole end-to-end transaction, to wit:

  • Establishing the connection (in particular initial SSL Handshake could be very long)
  • Sending packets to the server
  • here server starts measurement
  • Processing the request by the server
  • here server stops measurement
  • Waiting for the first packet to come (Latency)
  • Waiting for the last packet to come (Elapsed time)
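The steps above can be sketched with a toy simulation (the delays and function names here are invented for illustration and are not JMeter APIs):

```python
import time

def server_handler(processing_s=0.05):
    """Simulated server: measures only its own processing time."""
    start = time.perf_counter()         # <-- here server starts measurement
    time.sleep(processing_s)            # processing the request
    return time.perf_counter() - start  # <-- here server stops measurement

def client_request(connect_s=0.02, one_way_network_s=0.01):
    """Simulated JMeter sampler: measures the whole round trip."""
    start = time.perf_counter()
    time.sleep(connect_s)          # establishing the connection (incl. SSL handshake)
    time.sleep(one_way_network_s)  # request travels to the server
    server_time = server_handler()
    time.sleep(one_way_network_s)  # response travels back
    elapsed = time.perf_counter() - start
    return elapsed, server_time

elapsed, server_time = client_request()
# The client-side (JMeter-like) elapsed time always exceeds the server-side time,
# by at least the connect time plus two network traversals.
print(f"client elapsed: {elapsed*1000:.0f} ms, server time: {server_time*1000:.0f} ms")
```

Swapping in a longer `connect_s` (say, a slow SSL handshake) widens the gap without the server-side number changing at all, which is exactly the discrepancy described in the question.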

The time needed for the request to travel back and forth can really matter: with a faulty router or an improperly configured load balancer, for example, the user experience won't be smooth even if the actual server response time is low.