1 vote

I'm getting the following results, where the throughput does not increase even when I increase the number of threads.

Scenario#1:

Number of threads: 10

Ramp-up period: 60 s

Throughput: 5.8/s

Avg: 4025 ms

Scenario#2:

Number of threads: 20

Ramp-up period: 60 s

Throughput: 7.8/s

Avg: 5098 ms

Scenario#3:

Number of threads: 40

Ramp-up period: 60 s

Throughput: 6.8/s

Avg: 4098 ms

My JMeter test plan consists of a single Thread Group that contains a single GET request.

When I perform the request against an endpoint with a faster response time (less than 300 ms), I can achieve a throughput greater than 50 requests per second.

Can you identify the bottleneck here?

Is there a relationship between response time and throughput?

Are there any errors, especially in Scenario#3? – user7294900

2 Answers

1 vote

It's as simple as the JMeter user manual states:

Throughput = (number of requests) / (total time)

Now, assuming your test contains only a single GET request, Throughput will correlate with the average response time of your requests.
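To illustrate, here is a hedged back-of-the-envelope sketch of that formula and of the Little's-Law-style bound it implies (the 348-request figure and the steady-state assumption are mine, chosen to match the measured 5.8/s; JMeter does not report them this way):

```python
# Sketch of the JMeter throughput formula and its relation to
# response time. Numbers are illustrative, not authoritative.

def throughput(num_requests, total_time_s):
    # Throughput = (number of requests) / (total time)
    return num_requests / total_time_s

def max_throughput(threads, avg_response_s):
    # A fixed pool of threads can never complete more than
    # threads / avg_response_time requests per second.
    return threads / avg_response_s

print(throughput(348, 60))    # 348 requests in 60 s -> 5.8/s
print(max_throughput(20, 0.3))  # 20 threads at 300 ms avg -> ~66/s
```

The second bound shows why a fast endpoint (under 300 ms) can comfortably exceed 50 requests/second, while an endpoint averaging ~4 seconds caps a small thread pool at single-digit throughput.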

Notice that a Ramp-up period of 60 means JMeter will create the threads gradually over 1 minute, which adds to the total execution time; you can try reducing it to 10 seconds, or to a value equal to the Number of threads.
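To make the ramp-up effect concrete, here is a rough hypothetical model (it assumes each thread completes one request every average-response-time seconds once started, and that threads start evenly across the ramp-up period, which a real test will only approximate):

```python
# Sketch: how a long ramp-up dilutes measured throughput.
def requests_completed(threads, ramp_up_s, test_duration_s, avg_rt_s):
    total = 0.0
    for i in range(threads):
        start = ramp_up_s * i / threads          # thread i starts here
        active = max(0.0, test_duration_s - start)  # time spent sending
        total += active / avg_rt_s               # requests it completes
    return total

# 40 threads, 4 s average response, 120 s test:
full = requests_completed(40, 0, 120, 4.0)    # no ramp-up
ramped = requests_completed(40, 60, 120, 4.0)  # 60 s ramp-up
print(full / 120, ramped / 120)  # -> 10.0 vs ~7.56 requests/second
```

The same thread count and response time yield visibly lower measured throughput when a quarter of the test window is spent just starting threads.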

But you may have other samplers/controllers/components that affect the total time.

Also, in your case, especially in Scenario 3, some requests may have failed, in which case you are not calculating the Throughput of successful transactions only.

1 vote

In an ideal world, if you increase the number of threads by a factor of 2, the throughput should increase by the same factor.

In reality the "ideal" scenario is hardly achievable, so it looks like there is a bottleneck in your application. The process of identifying the bottleneck normally looks as follows:

  • Amend your test configuration to increase the load gradually, e.g. start with 1 virtual user and ramp up to 100 virtual users over 5 minutes
  • Run your test and look into the Active Threads Over Time, Response Times Over Time and Server Hits Per Second listeners. This way you will be able to correlate the increasing load with increasing response times and identify the point where performance starts degrading. See What is the Relationship Between Users and Hits Per Second? for more information
  • Once you figure out where the saturation point is, you need to determine what prevents your application from serving more requests; the reasons could include:

    • The application simply lacks resources (CPU, RAM, network, disk, etc.); make sure to monitor these resources, which can be done using e.g. the JMeter PerfMon Plugin
    • The infrastructure configuration is not suitable for high loads (e.g. incorrect application or database thread pool settings)
    • The problem is in your application code (inefficient algorithms, large objects, slow DB queries); these issues can be identified using a profiler tool
    • Also make sure you're following JMeter Best Practices, as it might be the case that JMeter is not capable of sending requests fast enough, due to either a lack of resources on the JMeter load generator side or incorrect JMeter configuration (too low heap, running the test in GUI mode, using listeners, etc.)
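The saturation-point idea above can be sketched minimally using the three data points from the question (a real test would collect many more samples via the listeners mentioned earlier):

```python
# Find where throughput stops scaling with the number of threads.
# (threads, measured throughput) pairs taken from the question:
samples = [(10, 5.8), (20, 7.8), (40, 6.8)]

peak_threads, peak_tput = max(samples, key=lambda s: s[1])
print(f"throughput peaks at ~{peak_tput}/s around {peak_threads} threads")

# Beyond the peak, adding threads no longer helps -- a sign the
# application (not the load generator) is the bottleneck:
for threads, tput in samples:
    if threads > peak_threads and tput < peak_tput:
        print(f"{threads} threads: {tput}/s -- past the saturation point")
```

Here the drop from 7.8/s at 20 threads to 6.8/s at 40 threads suggests the saturation point sits somewhere between those two loads.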