I have a table with a huge amount of data, where rows are inserted at roughly 5 per second. I am fetching data from this table with infinite pagination using LIMIT and OFFSET, along with LEFT JOINs, in descending order of created_date (which stores the insertion timestamp).
Because new rows keep arriving between fetches, the pagination ends up returning duplicate rows.
Suppose the table currently has 1000 rows:
Expected Output:
total records: 1000
1st fetch: limit: 10, offset: 0 (Expected: 1000, 999, 998, .... 991)
2nd fetch: limit : 10, offset: 10 (Expected: 990,...,981)
3rd fetch: limit : 10, offset: 20 (Expected: 980,..., 971)
Actual Output:
total records: 1000
1st fetch: limit: 10, offset: 0 (Actual: 1000, 999, 998, .... 991)
total records: 1005
2nd fetch: limit : 10, offset: 10 (Actual: 995,...,986)
Repeated records: 995, 994, 993, 992, 991
total records: 1012
3rd fetch: limit : 10, offset: 20 (Actual: 992,..., 983)
Repeated records: 992, 991, 990, 989, 988, 987, 986
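The behavior above can be reproduced outside MySQL: because each page is cut by position in the current ordering, every insert at the head of the table shifts all offsets down, so later pages overlap earlier ones. A minimal simulation (using integer ids as a stand-in for created_date ordering; the insert counts of 5 and 7 mirror the fetches above):

```python
def fetch(rows, limit, offset):
    """Return one page ordered newest-first, like ORDER BY ... DESC LIMIT/OFFSET."""
    return sorted(rows, reverse=True)[offset:offset + limit]

rows = list(range(1, 1001))           # 1000 existing rows

page1 = fetch(rows, 10, 0)            # [1000, 999, ..., 991]

rows += range(1001, 1006)             # 5 new rows arrive before the 2nd fetch
page2 = fetch(rows, 10, 10)           # [995, ..., 986] -- overlaps page1 by 5

rows += range(1006, 1013)             # 7 more rows arrive before the 3rd fetch
page3 = fetch(rows, 10, 20)           # [992, ..., 983] -- overlaps page2 by 7

# Each head-insert pushes every existing row one position deeper,
# so a fixed OFFSET now points at rows already seen on the previous page.
print(sorted(set(page1) & set(page2), reverse=True))  # [995, 994, 993, 992, 991]
```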
Is there any locking mechanism or procedure in MySQL to fetch the data consistently across requests, without adding another WHERE clause (such as filtering on record ids relative to the first fetch)?
If any further info is required for a solution, please comment.
My query is:
SELECT `tab_a`.*, `tab_b`.`likes`, `tab_b`.`comment`, `tab_b`.`share`
FROM `tab_a`
LEFT JOIN `tab_b` ON `tab_a`.`id` = `tab_b`.`post_id`
WHERE post_position IN (?) AND (post_date BETWEEN ? AND ?)
GROUP BY `tab_a`.`id` ORDER BY `tab_a`.`id` DESC, `tab_b`.`created_date` DESC
LIMIT 9 OFFSET 0