1 vote

I've started distributed performance testing using JMeter. If I run scenario 1:

no. of threads: 10
ramp-up period: 1
loop count: 300

everything runs smoothly, as scenario 1 translates to 3000 requests in 300 seconds, i.e. 10 requests per second.

If I run scenario 2:

no. of threads: 100
ramp-up period: 10
loop count: 30

AFAIK, scenario 2 also executes 3000 requests in 300 seconds, i.e. 10 requests per second.

But things start failing: the server faces heavy load and requests fail. In theory, scenario 1 and scenario 2 should be the same, right? Am I missing something?

All of these are heavy calls; each one takes 1-2 seconds under normal load.
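To see why the two configurations are not the same, here is a rough back-of-the-envelope sketch. It assumes each request takes about 1 second (the "heavy call" time from the question); the numbers are illustrative arithmetic, not JMeter output:

```python
# Rough offered-load comparison for the two Thread Group configs.
# Assumption: each request takes ~1 s, so each running thread offers
# ~1 request/second, and N concurrent threads offer ~N requests/second.

def offered_load(threads: int, loops: int, service_time_s: float):
    total_requests = threads * loops
    rate = threads / service_time_s          # steady-state requests/second
    duration = total_requests / rate         # seconds to complete all loops
    return total_requests, rate, duration

print(offered_load(10, 300, 1.0))   # scenario 1 -> (3000, 10.0, 300.0)
print(offered_load(100, 30, 1.0))   # scenario 2 -> (3000, 100.0, 30.0)
```

Both scenarios send 3000 requests in total, but scenario 2 tries to send them at roughly 100 per second rather than 10, which is a very different load on the server.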


3 Answers

1 vote

In an ideal world, scenario 2 would produce 100 requests per second and the test would finish in 30 seconds.

The fact that the second case has the same execution time indicates that your application cannot process incoming requests faster than 10 per second.

Try increasing the ramp-up time for the second scenario and watch the Transactions Per Second and Response Time charts.

Normally, when you increase the load, the number of Transactions Per Second should increase by the same factor while Response Time stays the same. Once response time starts growing and transactions per second start decreasing, you have passed the saturation point and discovered the bottleneck. You should report the point of maximum performance and investigate the reasons for the first bottleneck.
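The saturation behavior described above can be sketched with a toy model. The 10 req/s capacity and 1 s base response time are assumptions taken from the question's numbers, and the linear response-time growth past saturation is a crude approximation, not a queueing-theory result:

```python
# Toy saturation model: below capacity the server keeps up; above it,
# throughput flatlines and response time inflates with the overload factor.

def observe(offered_rps: float, capacity: float = 10.0, base_rt_s: float = 1.0):
    throughput = min(offered_rps, capacity)
    # Past the saturation point, requests queue up, so response time grows
    # roughly in proportion to how far the offered load exceeds capacity.
    response_time = base_rt_s * max(1.0, offered_rps / capacity)
    return throughput, response_time

for rps in (5, 10, 50, 100):
    print(rps, observe(rps))
```

Below 10 req/s, throughput tracks the offered load and response time stays flat; at 100 req/s offered, throughput is still capped at 10 while response times balloon, which matches the failures seen in scenario 2.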

More information: What is the Relationship Between Users and Hits Per Second?

1 vote

In scenario 2, after 10 seconds you have 100 concurrent users executing requests in parallel; your server may not handle such load well, or may actively block it.

Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.

In scenario 1, after 1 second you have only 10 concurrent users looping through the flow, which does not put significant load on the server.

Note that your server may also restrict the number of concurrent users, possibly only on specific request(s).

1 vote

We should be very clear about what the ramp-up time means. The following is an extract from the official documentation:

[screenshot of the JMeter documentation describing the ramp-up period]

Scenario 1: no. of threads: 10, ramp-up period: 1, loop count: 300

In the above scenario, 10 threads (virtual users) are created in 1 second. Each user loops 300 times, so there will be 3000 requests to the server in total. Throughput cannot be calculated in advance from this configuration alone; it fluctuates based on server capability, network conditions, etc. You could control the throughput with certain components and plugins.

Scenario 2: no. of threads: 100, ramp-up period: 10, loop count: 30

In scenario 2, 100 threads (virtual users) are created in 10 seconds, and all 100 send requests to the server concurrently. Each user sends 30 requests. In the second scenario you will therefore have higher throughput (requests per second) than in scenario 1. It looks like the server cannot handle 100 users sending requests concurrently.

Ramp-up time applies only to the first cycle of each thread: it staggers the first request of each user in their first iteration.
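The staggering can be sketched as simple arithmetic: with N threads and a ramp-up of R seconds, JMeter starts successive threads roughly R/N seconds apart. This is an illustrative calculation, not JMeter's internal scheduling code:

```python
# Approximate start offsets of each thread during ramp-up:
# thread i begins about i * (ramp_up / threads) seconds into the test.

def start_offsets(threads: int, ramp_up_s: float):
    step = ramp_up_s / threads
    return [round(i * step, 3) for i in range(threads)]

print(start_offsets(10, 1))    # scenario 1: a new thread every 0.1 s
print(start_offsets(100, 10))  # scenario 2: also a new thread every 0.1 s
```

Note that both scenarios start a new thread every 0.1 seconds; the difference is that scenario 1 stops at 10 concurrent users while scenario 2 keeps going until 100 are running at once.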