
I am using Locust for load testing an API. I am sending POST requests to that API. My use case is to check how many requests the API can handle at a time. This is my code:

from locust import HttpLocust, TaskSet, task
from requests.auth import HTTPDigestAuth
from credentials import *


class UserBehavior(TaskSet):
    def on_start(self):
        if len(USER_CREDENTIALS) > 0:
            self.name, self.password, self.email, self.phone, self.country_abbrev = USER_CREDENTIALS.pop()

    @task
    def registration(self):
        URL = "ip/user/register"
        PARAMS = {'name': self.name, 'password': self.password,
                  'primary_email': self.email,
                  'primary_mobile_number': self.phone,
                  'country_abbrev': self.country_abbrev}
        self.client.post(url=URL, params=PARAMS, auth=HTTPDigestAuth('user', 'pass'))


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000


And then I run locust -f locust.py --host=localhost:8089

Then I get an option to set the hatch rate and the number of users. What should I set these values to if I just want to send 100 POST requests concurrently, once, to check whether the API can handle 100 requests at one time?

Any help would be appreciated

What does the @task mean? – Mahesh Kumaran

2 Answers


Locust has two parameters, hatch rate and number of users. For your case, set the number of users to 100 and the hatch rate to 100.

This means Locust spawns 100 users at a rate of 100 users per second, so all 100 are active almost immediately. That covers your use case. Hope this helps!
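
If you prefer to skip the web UI, the pre-1.0 Locust versions (the ones that still ship HttpLocust, as in your code) can also be run headless from the command line. As a rough sketch, assuming the old --no-web, -c (number of clients), -r (hatch rate) and -t (run time) flags, with a placeholder API host; check locust --help for your version, since these flags were renamed in Locust 1.0:

locust -f locust.py --no-web -c 100 -r 100 -t 1m --host=http://your-api-host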


What I understood is that you want to find the breaking point of your server, i.e. how many requests it is actually capable of handling.

You can gradually increase the number of users and the hatch rate until you see errors in the API responses, or in some cases until the server crashes.

You can try:

hatch rate (-r)    number of users (-c)
10                 100
50                 500
100                1000
500                5000
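
To actually spot the breaking point, it helps to mark bad responses as failures so they show up in Locust's failure statistics rather than only in the response-time numbers. Below is a small sketch of the registration task from the question using catch_response; the endpoint, parameters and HTTPDigestAuth credentials are just the placeholders from the question:

@task
def registration(self):
    URL = "ip/user/register"
    PARAMS = {'name': self.name, 'password': self.password,
              'primary_email': self.email,
              'primary_mobile_number': self.phone,
              'country_abbrev': self.country_abbrev}
    # catch_response=True lets the task decide what counts as a failure
    with self.client.post(URL, params=PARAMS,
                          auth=HTTPDigestAuth('user', 'pass'),
                          catch_response=True) as response:
        if response.status_code == 200:
            response.success()
        else:
            # recorded as a failure in Locust's stats
            response.failure("register returned %s" % response.status_code)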