119
votes

I was wondering if there was a way to get the number of results from a MySQL query, and at the same time limit the results.

The way pagination works (as I understand it), first I do something like

query = SELECT COUNT(*) FROM `table` WHERE `some_condition`

After I get the num_rows(query), I have the number of results. But then to actually limit my results, I have to do a second query like:

query2 = SELECT COUNT(*) FROM `table` WHERE `some_condition` LIMIT 0, 10

My question: Is there any way to both retrieve the total number of results that would be returned AND limit the results returned, in a single query? Or is there a more efficient way of doing this? Thanks!

9
Although you wouldn't have COUNT(*) in query2 – dlofrodloh

9 Answers

71
votes

I almost never do two queries.

Simply return one more row than is needed, display only 10 on the page, and if more rows came back than were displayed, show a "Next" button.

SELECT x, y, z FROM `table` WHERE `some_condition` LIMIT 0, 11
// iterate through and display 10 rows.

// if there were 11 rows, display a "Next" button.
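A minimal PHP sketch of that pattern (assuming a hypothetical PDO connection in $pdo; the table, columns, and some_condition are placeholders carried over from the question):

$page    = 1;                              // 1-based page number
$perPage = 10;
$offset  = ($page - 1) * $perPage;

// Ask for one row more than we will display.
$sql = sprintf(
    'SELECT x, y, z FROM `table` WHERE some_condition LIMIT %d, %d',
    $offset,
    $perPage + 1
);
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

$hasNextPage = count($rows) > $perPage;    // the extra row only signals that more rows exist
$rows        = array_slice($rows, 0, $perPage);

// ... render $rows, and show a "Next" button when $hasNextPage is true.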

Your query should return results in order of relevance, most relevant first. Chances are, most people aren't going to care about going to page 236 out of 412.

When you do a google search, and your results aren't on the first page, you likely go to page two, not nine.

69
votes

No, that's how many applications that want to paginate have to do it. It's reliable and bullet-proof, even though it runs the query twice. But you can cache the count for a few seconds and that will help a lot.

The other way is to use the SQL_CALC_FOUND_ROWS modifier and then call SELECT FOUND_ROWS(). Apart from the fact that you have to issue the FOUND_ROWS() call afterwards, there is a problem with this: there is a bug in MySQL that this tickles, affecting ORDER BY queries and making them much slower on large tables than the naive two-query approach.
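For reference, that pattern looks roughly like this (a sketch assuming a PDO connection in $pdo; FOUND_ROWS() must be read on the same connection, immediately after the first query):

// The modifier tells MySQL to compute the full result count despite the LIMIT.
$rows = $pdo->query(
    'SELECT SQL_CALC_FOUND_ROWS x, y, z FROM `table` WHERE some_condition LIMIT 0, 10'
)->fetchAll(PDO::FETCH_ASSOC);

// Immediately afterwards, on the same connection:
$total = (int) $pdo->query('SELECT FOUND_ROWS()')->fetchColumn();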

28
votes

Another approach to avoiding double-querying is to fetch all the rows for the current page using a LIMIT clause first, then only do a second COUNT(*) query if the maximum number of rows were retrieved.

In many applications, the most likely outcome will be that all of the results fit on one page, and having to do pagination is the exception rather than the norm. In these cases, the first query will not retrieve the maximum number of results.

For example, answers on a stackoverflow question rarely spill onto a second page. Comments on an answer rarely spill over the limit of 5 or so required to show them all.

So in these applications you can simply do a query with a LIMIT first, and then, as long as that limit is not reached, you know exactly how many rows there are without the need for a second COUNT(*) query - which should cover the majority of situations.
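A sketch of that strategy in PHP (assuming a PDO connection in $pdo and the placeholder names from the question):

$limit  = 10;
$offset = 0;

$rows = $pdo->query(sprintf(
    'SELECT x, y, z FROM `table` WHERE some_condition LIMIT %d, %d',
    $offset,
    $limit
))->fetchAll(PDO::FETCH_ASSOC);

if (count($rows) < $limit) {
    // The page is not full, so nothing lies beyond it: the total is known for free.
    $total = $offset + count($rows);
} else {
    // Only in this (less common) case do we pay for the extra COUNT(*) query.
    $total = (int) $pdo->query(
        'SELECT COUNT(*) FROM `table` WHERE some_condition'
    )->fetchColumn();
}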

15
votes

In most situations it is much faster and less resource intensive to do it in two separate queries than to do it in one, even though that seems counter-intuitive.

If you use SQL_CALC_FOUND_ROWS, then for large tables it makes your query much slower, significantly slower even than executing two queries, the first with a COUNT(*) and the second with a LIMIT. The reason for this is that SQL_CALC_FOUND_ROWS causes the LIMIT clause to be applied after fetching the rows instead of before, so it fetches the entire row for all possible results before applying the limits. This can't be satisfied by an index because it actually fetches the data.

If you take the two-query approach, with the first query only fetching COUNT(*) and not any actual row data, it can be satisfied much more quickly because it can usually use indexes and doesn't have to fetch the actual row data for every row it looks at. Then, the second query only needs to look at the first $offset + $limit rows and then return.
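In code, the two-query approach is simply the following (a sketch using the placeholder names from the question and an assumed PDO connection $pdo):

// Query 1: fetch only the count; no row data is read, so an index can often satisfy it.
$total = (int) $pdo->query(
    'SELECT COUNT(*) FROM `table` WHERE some_condition'
)->fetchColumn();

// Query 2: fetch only the rows for the current page.
$rows = $pdo->query(
    'SELECT x, y, z FROM `table` WHERE some_condition LIMIT 0, 10'
)->fetchAll(PDO::FETCH_ASSOC);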

This post from the MySQL performance blog explains this further:

http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/

For more information on optimising pagination, check this post and this post.

6
votes

For anyone looking for an answer in 2020: as per the MySQL documentation:

"The SQL_CALC_FOUND_ROWS query modifier and accompanying FOUND_ROWS() function are deprecated as of MySQL 8.0.17 and will be removed in a future MySQL version. As a replacement, considering executing your query with LIMIT, and then a second query with COUNT(*) and without LIMIT to determine whether there are additional rows."

I guess that settles that.

https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_found-rows

2
votes

My answer may be late, but you can skip the second query (with the limit) and just filter the info through your back end script. In PHP for instance, you could do something like:

if (count($queryResult) > 0) {
    $counter = 0;
    foreach ($queryResult as $result) {
        // keep only the rows that fall inside the requested page
        if ($counter >= $startAt && $counter < $startAt + $numOfRows) {
            // do what you want here
        }
        $counter++;
    }
}

But of course, when you have thousands of records to consider, it becomes inefficient very quickly. A pre-calculated count may be a good idea to look into.

Here's a good read on the subject: http://www.percona.com/ppc2009/PPC2009_mysql_pagination.pdf

2
votes
query = SELECT col, col2, (SELECT COUNT(*) FROM `table` WHERE `some_condition`)/10 AS total FROM `table` WHERE `some_condition` LIMIT 0, 10

Here 10 is the page size and 0 is the offset; for page N you would use (N - 1) * pageSize as the offset in the LIMIT clause.
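A sketch of how those numbers are typically computed in application code (hypothetical variable names; a PDO connection is assumed in $pdo):

$pageNumber = 3;                                  // 1-based
$pageSize   = 10;
$offset     = ($pageNumber - 1) * $pageSize;      // 20 for page 3

$sql = sprintf(
    'SELECT col, col2,
            (SELECT COUNT(*) FROM `table` WHERE `some_condition`) / %d AS total
     FROM `table` WHERE `some_condition` LIMIT %d, %d',
    $pageSize,
    $offset,
    $pageSize
);
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);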

0
votes

You can reuse most of the query in a subquery and set it to an identifier. For example a movie query that finds movies containing the letter 's' ordering by runtime would look like this on my site.

SELECT Movie.*, (
    SELECT Count(1) FROM Movie
        INNER JOIN MovieGenre 
        ON MovieGenre.MovieId = Movie.Id AND MovieGenre.GenreId = 11
    WHERE Title LIKE '%s%'
) AS Count FROM Movie 
    INNER JOIN MovieGenre 
    ON MovieGenre.MovieId = Movie.Id AND MovieGenre.GenreId = 11
WHERE Title LIKE '%s%' LIMIT 8;

Do note that I'm not a database expert, and I am hoping someone will be able to optimize that a bit better. As it stands, running it straight from the SQL command-line interface, they both take ~0.02 seconds on my laptop.

-15
votes
SELECT * 
FROM table 
WHERE some_condition 
ORDER BY RAND()
LIMIT 0, 10