Sorry, but "it depends" is the best answer here.
First, the most valuable tool for answering this question is not ab, siege, or JMeter (my favourite open source tool); it's a spreadsheet.
The number of requests your system can handle is determined by which bottleneck you hit first. Some of those bottlenecks will be hardware/infrastructure (bandwidth, CPU, the effectiveness of your load balancing scheme), some will be "off the shelf" software and the way it's configured (Apache's ability to serve static files, for instance), and some will be your own software (how efficiently your PHP scripts and database queries run). Some of the bottleneck resources may not be under your control at all: most sites hosted in Europe or the US are slow when accessed from China, for instance.
I've used a spreadsheet to model user journeys - this depends entirely on your particular case, but a user journey might be:
- visit homepage
- click "register/log in" link
- register as new user
- click "verify" link from email
- access restricted content
Most sites support many user journeys, and at any one time the mix between those journeys is likely to vary significantly.
For each user journey, I then assess the nature of the visitor requests - "visit homepage", for instance, might be "download 20 static files and 1 PHP script", while "register as new user" might require "1 PHP script", but with a fairly complex set of database queries behind it.
This process ends up as a set of rows in the spreadsheet showing the number of requests per type. For precision, it may be necessary to treat each dynamic page (PHP script) as its own request, but I usually lump all the static assets together.
That gives you a baseline to test, based on a whole bunch of assumptions. You can now create load testing scripts representing "20 percent new users, 30 percent returning users, 10 percent homepage only, 20 percent complete purchase route, 20 percent abandon basket" or whatever user journeys you come up with.
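The spreadsheet model above can be sketched in a few lines of code. Everything here is an illustrative assumption - the journey names, the per-journey request counts, and the traffic mix are invented numbers standing in for whatever your own spreadsheet contains:

```python
# Sketch of the spreadsheet model: requests generated by one pass
# through each journey, weighted by an assumed traffic mix.
# All figures below are hypothetical placeholders.

journeys = {
    "homepage only":     {"static": 20, "php": 1},
    "new user":          {"static": 45, "php": 6},
    "returning user":    {"static": 30, "php": 4},
    "complete purchase": {"static": 60, "php": 10},
    "abandon basket":    {"static": 40, "php": 7},
}

# Assumed mix of visitor sessions; the fractions must sum to 1.0.
mix = {
    "homepage only":     0.10,
    "new user":          0.20,
    "returning user":    0.30,
    "complete purchase": 0.20,
    "abandon basket":    0.20,
}

def blended_requests(journeys, mix):
    """Weighted average requests generated per visitor session."""
    static = sum(journeys[j]["static"] * w for j, w in mix.items())
    php = sum(journeys[j]["php"] * w for j, w in mix.items())
    return static, php

static, php = blended_requests(journeys, mix)
print(f"Per session: {static:.1f} static requests, {php:.1f} PHP requests")
```

Multiplying those per-session figures by your expected sessions per second gives the request rates your load test should generate.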
Create a load testing script including the journeys and run it, ideally from multiple locations (there are several cheap ways to run JMeter from cloud providers). Measure response times, and find the load at which the response time of your slowest request exceeds your quality threshold (I usually recommend 3 seconds) in more than 10% of cases.
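The pass/fail rule above (slowest request over the threshold in more than 10% of cases) is easy to apply to the raw timings a tool like JMeter exports. A minimal sketch, with invented sample timings:

```python
# Quality threshold check: a run fails if more than 10% of samples
# for the slowest request exceed 3 seconds.
# The timing data below is invented for illustration.

def passes_threshold(response_times, limit_s=3.0, max_violations=0.10):
    """True if at most `max_violations` of samples exceed `limit_s`."""
    over = sum(1 for t in response_times if t > limit_s)
    return over / len(response_times) <= max_violations

samples = [1.2, 0.8, 2.9, 3.4, 1.1, 0.9, 2.2, 1.7, 0.6, 1.3]  # seconds
print("pass" if passes_threshold(samples) else "fail")
```

In practice you would run this over the timings for each request type at each load level, and the highest load that still passes is your answer for that journey mix.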
Try varying the split between user journeys - an advertising campaign might drive a lot of new registrations, for instance. I'd usually recommend at least 3 or 4 different mixtures.
If any of the variations in user journeys gives results that are significantly below the average (15% or more), that's probably your worst case scenario.
Otherwise, average the results, and you will know, with a reasonable degree of certainty, that this is the minimum number of requests you can support. The more variations in user journey you can test, the more certain it is that the number is accurate. By "minimum", I mean that you can be reasonably sure that you can manage at least this many users. It does not mean you can handle at most this many users - a subtle difference, but an important one!
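The comparison across mixes can be sketched the same way: average the supported load over all tested mixes, and flag any mix that comes in 15% or more below the average as the worst case. The requests-per-second figures here are hypothetical results from separate load-test runs:

```python
# Compare supported load across journey mixes; flag any mix that is
# 15% or more below the average as the worst-case scenario.
# The req/s figures are hypothetical load-test results.

def summarise(results, worst_case_margin=0.15):
    avg = sum(results.values()) / len(results)
    worst = {name: rps for name, rps in results.items()
             if rps < avg * (1 - worst_case_margin)}
    return avg, worst

results = {  # mix name -> max req/s before the threshold is breached
    "typical weekday": 420,
    "marketing campaign (many registrations)": 310,
    "weekend browsing": 450,
    "checkout heavy": 400,
}

avg, worst = summarise(results)
print(f"average supported load: {avg:.0f} req/s")
for name, rps in worst.items():
    print(f"worst case: {name} at {rps} req/s")
```

If `worst` is empty, the average is your defensible minimum; if not, the flagged mix is the number to plan around.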
In most web applications, the bottleneck is the dynamic page generation - there's relatively little point testing Apache's ability to serve static files, or your hosting provider's bandwidth. It's good as a "have we forgotten anything" test, but you'll get far more value out of testing your PHP scripts.
Before you even do this, I'd recommend playing "hunt the bottleneck" with just the PHP files - the process I've outlined above doesn't tell you where the bottleneck is, only that there is one. As it's most likely to be the PHP (and of course all the stuff you do from PHP, like calling a database), instrumenting the solution to test for performance is usually a good idea.
You should also use a tool like YSlow to make sure your HTTP/HTML setup is optimized - setting cache headers for your static assets will have a big impact on your bandwidth bill, and may improve performance as perceived by the end user.