
I have 4 Vertx mock APIs behind Nginx. When I run a JMeter load test with 250 users, the result is the same whether I use 1 Vertx node or all of them, e.g. with 1 Vertx node (0 s latency) I get 995 TPS, and with all 4 nodes the result is the same. How can I improve TPS by increasing the number of backends? P.S. When I add a timer to create backend latency, the TPS drops significantly (950 → 180). Is this due to an error in my code?

Server: Linux 64-bit, JMeter 3.0 with 250 users / 125 s ramp-up

//---Vertx mock service ---------------------------
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.core.json.Json;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Route;
import io.vertx.ext.web.Router;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

import java.util.concurrent.TimeUnit;

public class App extends AbstractVerticle {

    private static final Logger LOGGER = Logger.getLogger("InfoLogging");

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        PropertyConfigurator.configure(System.getProperty("user.dir") + "/log4j.properties");

        HttpServer httpServer = vertx.createHttpServer();
        Router router = Router.router(vertx);

        Route ELKPaymentResponse = router
                .post("/:param/amount")
                .produces("application/json")
                .handler(routingContext -> {
                    routingContext.request().bodyHandler(bodyHandler -> {
                        HttpServerResponse response = routingContext.response();
                        // response.setChunked(true);
                        String jsonResponse = "{}"; // mock response body elided in the original post

                        // Artificial 1 s delay used to simulate backend latency
                        vertx.setTimer(TimeUnit.SECONDS.toMillis(1), l -> {
                            JsonObject json = new JsonObject(jsonResponse);
                            response.putHeader("Content-Type", "application/json; charset=UTF-8")
                                    .setStatusCode(200)
                                    .end(Json.encodePrettily(json));
                        });
                    });
                });

        // use requestHandler(router::accept) on Vert.x versions before 3.8
        httpServer.requestHandler(router).listen(8090);
    }
}
upstream test {
    server 127.0.0.1:8090;
    server 127.0.0.1:8091;
    # server 127.0.0.1:8092;
    # server 127.0.0.1:8093;
}

server {
    listen 8290;
    server_name localhost;

    location / {
        proxy_pass http://test;
    }
}

1 Answer


What is the resource utilization (CPU, memory, network, connections, threads, etc.) of your JMeter server during the 995 TPS test? What do the same numbers look like for the single Vertx node during that test?

Make sure that JMeter is not the bottleneck, and verify that the single Vertx server is handling the traffic without bottlenecks of its own. The single Vertx server's resource utilization at 995 TPS will give you a clue as to how much TPS one server can handle. Only then move on to multiple Vertx nodes.

250 threads is pretty low for a 995 TPS test run. I prefer to set a target TPS with something like the Throughput Shaping Timer, and to use a larger number of JMeter threads so that the pacing per thread is longer than the response time. That way TPS is not determined simply by response time. Without pacing, you have an uncontrolled firehose, and it becomes hard to get repeatable numbers or to find bottlenecks in the system under test.
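To illustrate why pacing matters, here is a rough closed-loop throughput sketch (the response-time and pacing values below are illustrative assumptions, not measurements from this test): with a fixed thread pool, TPS is bounded by threads divided by the time each thread spends per request, which explains both the 250-thread ceiling and the collapse once a 1 s delay is added.

```java
// Closed-loop throughput sketch: max TPS ≈ threads / (responseTime + pacing).
// All numbers are illustrative assumptions, not measured values.
public class PacingSketch {

    // Upper bound on throughput for a fixed number of looping threads
    static double maxTps(int threads, double responseTimeSec, double pacingSec) {
        return threads / (responseTimeSec + pacingSec);
    }

    public static void main(String[] args) {
        // 250 threads, ~0.25 s response time, no pacing: throughput is
        // dictated entirely by response time.
        System.out.println(maxTps(250, 0.25, 0.0));   // 1000.0

        // Same 250 threads once a 1 s backend delay is added: the ceiling
        // collapses to 250 TPS no matter how many backends are behind Nginx.
        System.out.println(maxTps(250, 1.0, 0.0));    // 250.0

        // To hold a 1000 TPS target with 1 s pacing per thread (pacing longer
        // than response time), you need threads = targetTps * (rt + pacing).
        System.out.println(1000 * (0.25 + 1.0));      // 1250.0
    }
}
```

This is why adding backends changed nothing: the thread count, not server capacity, was the limiting factor in both of your runs.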