
We have written a test script with the following details:

Number of threads (Users): 400
Ramp-up time: 480 seconds (8 minutes)
Script running time: 900 seconds (15 minutes)

The tree structure of the script is:

ThreadGroup
|---Request1
|---Request2
|---Request3
|---Request4
|---Request5
|---Constant timer(5 seconds)

My expectation for this script is that there should be a delay of 5 seconds between each HTTP request sample. But this is not how it seems to be working. I am noticing that it adds a delay of 5 seconds between each request type, that is, between Request 1 and Request 2, and not necessarily between each request sample.

For example, right now what is happening is:

Request 1 sample 1
Request 1 sample 2
5 seconds delay
Request 2 sample 1
Request 2 sample 2

The output I am looking for is

Request1 sample 1
5 seconds delay
Request 1 sample 2
5 seconds delay
Request 2 sample 1
5 seconds delay
Request 3 sample 1

Am I doing something wrong here? I have searched Google and Stack Overflow but I have not found this exact scenario depicted anywhere.


1 Answer


Given the JMeter settings you have provided above, your current output looks correct.

Perhaps the confusion here is around the exact workings of the ramp-up period and the Constant Timer, as in this case these should be the only things affecting the order of execution.

The Apache JMeter site actually puts the workings of the ramp-up period best:

The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds. (https://jmeter.apache.org/usermanual/test_plan.html)
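Applying that arithmetic to the figures in your question, here is a minimal Python sketch (using only the numbers given above):

threads = 400   # Number of threads (Users), from the question
ramp_up = 480   # Ramp-up time in seconds, from the question

# JMeter spaces thread starts evenly across the ramp-up period
interval = ramp_up / threads
print(f"A new thread starts every {interval} seconds")   # -> 1.2 seconds

# Approximate start time of each thread, relative to test start
start_times = [n * interval for n in range(threads)]
print(f"Thread 1 starts at {start_times[0]:.1f}s, thread 400 at {start_times[-1]:.1f}s")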

In addition, a Constant Timer provides a way for you to space out the individual steps in your test plan. Importantly, it does this only within each thread.

So effectively, your ramp-up period means a new thread starts every 1.2 seconds (480 seconds / 400 threads). Inside each thread, each request is delayed by 5 seconds. Rounding the thread interval to 1 second, this gives output roughly along the lines of:

  1. (Start) Thread 1 starts - Request 1 executes (pauses for 5 seconds)
  2. (1 sec) Thread 2 starts - Request 1 executes (pauses for 5 seconds)
  3. (2 sec) Thread 3 starts - Request 1 executes (pauses for 5 seconds)
  4. (3 sec) Thread 4 starts - Request 1 executes (pauses for 5 seconds)
  5. (4 sec) Thread 5 starts - Request 1 executes (pauses for 5 seconds)
  6. (5 sec) Thread 6 starts - Request 1 executes (pauses for 5 seconds) + Thread 1 executes Request 2.

As you can see, it isn't until much later, after a block of first requests, that your second requests start occurring, much like the output you are seeing.
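If it helps to see this concretely, below is a rough Python simulation of that schedule. This is a sketch only: it assumes every response returns instantly and rounds the thread interval to 1 second, neither of which holds in a real run:

RAMP_INTERVAL = 1   # seconds between thread starts (rounded from 1.2)
DELAY = 5           # Constant Timer delay, applied within each thread
THREADS = 6         # only the first few threads, for readability
REQUESTS = 3        # only the first few requests per thread

# The Constant Timer only paces requests *within* a thread, so thread n
# issues its request r at roughly n * RAMP_INTERVAL + r * DELAY seconds.
events = sorted(
    (n * RAMP_INTERVAL + r * DELAY, n + 1, r + 1)
    for n in range(THREADS)
    for r in range(REQUESTS)
)
for t, thread, request in events:
    print(f"{t:>2}s  Thread {thread} -> Request {request}")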

From what I understand of your question, you only ever want one request to occur every 5 seconds across all threads. To achieve this, look at the Constant Throughput Timer. The Constant Throughput Timer has a setting that lets you share its timer across 'All Active Threads' so that you create a constant load on the server.
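For one request every 5 seconds across the whole test, the target to enter would be 12 samples per minute, since the Constant Throughput Timer expresses its target in samples per minute. A trivial sketch of that conversion:

# One request every 5 seconds, shared across all active threads
seconds_between_requests = 5
target_throughput = 60 / seconds_between_requests   # samples per minute
print(f"Constant Throughput Timer target: {target_throughput} samples/minute")  # 12.0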

To get the order of execution at the start correct, experiment with the ramp-up period.