
I am trying to load test a webpage using multiple remotes. The performance results for the webserver vary depending on which JMeter client (master) I use.

I am testing in non-GUI mode with just one remote slave, but I found that I get different results from the same remote slave when using different masters.
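For reference, a typical non-GUI distributed run looks like this (the test plan name, results file, and slave address are placeholders):

    # Run the test plan in non-GUI mode, driving the load from one remote slave
    jmeter -n -t test.jmx -R slave.example.com -l results.jtl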

The slave node is a dedicated server with an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz and 32GB RAM (10GB dedicated to JMeter).
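(For completeness: a heap of that size is typically set via JVM_ARGS or by editing HEAP in the bin/jmeter script; the exact flags below are just an illustration.)

    # One way to give the slave's JMeter process a 10GB heap
    JVM_ARGS="-Xms10g -Xmx10g" jmeter-server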

When I use a JMeter master on a virtual machine with 2 CPUs (Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz) and 3.7GB RAM, hosted by the same provider as the slave node, the test result for my webpage is only 50 transactions/s.

When I switch the JMeter master to Google Cloud (an n1-standard-1 machine with 1 Intel(R) Xeon(R) CPU @ 2.50GHz and 3.75GB RAM) and use the same slave node, the result is 130 transactions/s.

The JMeter master setup is the same in both cases. I really have no clue why these results differ. From my understanding, the JMeter master (client) only collects the results from the remote slaves, and the traffic is generated by the remote slave, so the results should be the same.
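One detail worth noting here: by default the slave streams every sample result back to the master over RMI during the run, so a slow master or a slow link between the two can throttle the slave. Whether that is what happens in this case is only an assumption, but the result-transfer mode can be changed at launch, for example:

    # Send stripped, batched results to the master instead of every full sample;
    # StrippedBatch is one of JMeter's built-in sample sender modes
    jmeter -n -t test.jmx -R slave.example.com -Jmode=StrippedBatch -l results.jtl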


1 Answer


You are definitely hitting the limits of your slave. I would suggest measuring the OS-level metrics: CPU, RAM, swap, network and disk usage, Java heap size, Java garbage collections, etc.
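As a rough starting point with built-in Linux tools (vmstat and sar come with the procps/sysstat packages; replace <pid> with the PID of the jmeter-server process):

    top                       # CPU and memory usage per process
    vmstat 5                  # CPU, memory, swap and I/O every 5 seconds
    sar -n DEV 5              # network throughput per interface
    jstat -gcutil <pid> 5000  # JMeter heap occupancy and GC activity every 5 s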

You can do this either with built-in tools or with the JMeter PerfMon plugin, which allows monitoring of more than 70 metrics; it should let you identify the bottleneck, which in this case could be connected with JMeter itself.
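If you go the PerfMon route, the usual setup (the script name and port are the plugin's defaults, as far as I recall) is to run the Server Agent on the slave and point a PerfMon Metrics Collector listener at it:

    # On the slave: start the PerfMon Server Agent (listens on port 4444 by default)
    ./startAgent.sh
    # In the test plan: add a "jp@gc - PerfMon Metrics Collector" listener
    # pointing at <slave-host>:4444 and select the metrics to record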

See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for plugin setup, configuration, and usage instructions.