
I'm testing my Apache & PHP setup (default configuration on Ubuntu) with the `ab` tool. With 2 concurrent connections I get fairly satisfactory results:

ab -k -n 1000 -c 2 http://localserver/page.php

Requests per second:    184.81 [#/sec] (mean)
Time per request:       10.822 [ms] (mean)
Time per request:       5.411 [ms] (mean, across all concurrent requests)

Given that it's a virtual machine with low memory, that's okay. Now I want to test a more realistic scenario: requests spread among 100 users (read: connections) connected at the same time:

ab -k -n 1000 -c 100 http://localserver/page.php

Requests per second:    60.22 [#/sec] (mean)
Time per request:       1660.678 [ms] (mean)
Time per request:       16.607 [ms] (mean, across all concurrent requests)

This is much worse. While overall throughput fell by a factor of about three (from 184 to 60 requests per second), the time per request from a user's perspective rose far more sharply (from about 11 ms to over 1.6 seconds on average). The longest request took over 8 seconds, and manually connecting to the local server with a web browser took almost 10 seconds during the tests.

What could be the cause and how can I optimize the concurrency performance to an acceptable level?

(I'm using the default configuration as shipped with Ubuntu Linux Server.)

Doesn't this first of all depend on what the local script is executing? Could you use memcache(d)? – Liam Sorsby
The local script is executing just a bunch of simple echo statements. I purposefully did not include any database work. – JohnCand
If it's a low-memory VM, why not use nginx, lighttpd, or something else? – Matt
What platform are you running this on? Unless it's an idle test machine, a dedicated server that you control, or a VM instance that is not using burstable resource limits, your tests mean very little, as they are almost entirely dependent on what else is using that hardware at the time. If this is running on something like an AWS micro instance [which I suspect it is], you shouldn't expect anything other than horrid performance. – Sammitch
@Sammitch It's running on a local VirtualBox instance on an Intel quad-core at ~3 GHz. The VM has 256 MB RAM and 1 CPU assigned. The host machine is more or less idle and has 8 GB RAM. – JohnCand

1 Answer


As a start, look at the amount of memory each script may consume, i.e. the PHP memory_limit, and divide the VM's memory by that. The result is roughly the number of connections you can handle at the same time without running out of memory and pushing the server into thrashing.
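
A quick way to sanity-check that arithmetic (the 128M figure below is an assumption based on Ubuntu's shipped default; verify against your own php.ini):

php -r 'echo ini_get("memory_limit"), PHP_EOL;'

# With a 256 MB VM and a 128 MB memory_limit, the worst case is
# 256 / 128 = 2 requests in flight before memory runs out.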

That will work out to a very low number of connections, so you need to do one of the following (a configuration sketch follows the list):

  • increase memory
  • decrease memory_limit
  • make each connection finish faster
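
A minimal sketch of the "decrease memory_limit" option, combined with capping Apache's worker count so the workers actually fit in RAM. The paths assume a stock Ubuntu php5 layout with the prefork MPM, and the values are placeholders to tune, not recommendations:

# Lower the per-script memory ceiling in php.ini:
sudo sed -i 's/^memory_limit.*/memory_limit = 32M/' /etc/php5/apache2/php.ini

# Cap Apache's worker count so that workers x memory_limit fits in the
# VM's 256 MB. The directive is MaxClients with the prefork MPM on
# Apache 2.2 (MaxRequestWorkers on 2.4); set it in the MPM config, e.g.:
#   MaxClients 8
sudo service apache2 restart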

The next step is to check whether any database queries take longer than expected. I usually start with the mysql-slow.log, looking at queries longer than 0.5 s. Also eliminate queries that don't use indexes if you can.
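
If the slow query log isn't enabled yet, it can be switched on at runtime; a sketch, assuming a MySQL 5.x server and an account with SUPER privileges:

# Log queries over 0.5 s, plus any query that doesn't use an index:
mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 0.5; SET GLOBAL log_queries_not_using_indexes = 'ON';"

# Watch the offenders as they are logged (path is Ubuntu's default):
sudo tail -f /var/log/mysql/mysql-slow.log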

After that, install a monitoring tool like collectd and check whether there is enough CPU available.
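
Before (or alongside) collectd, vmstat gives a quick one-off view of CPU and swap pressure; a sketch:

# Sample once per second for ten seconds:
vmstat 1 10
# Non-zero si/so columns mean the VM is swapping (thrashing);
# an id (idle) column near zero means the CPU itself is the bottleneck.

# For continuous monitoring, install collectd (Debian/Ubuntu):
sudo apt-get install collectd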

From a business perspective, it depends on whether this is a new website/system or something existing. If it is new and growth is dramatic, you need to overspend on hardware for a while; a system that doesn't work or crashes under traffic erodes trust in a business very fast. On top of that, it's usually not worth optimising heavily while the hosting bill is under $1000 per month. If that is not affordable, you may need to go back to your business model.