I have a very specific use case. I have a copy of my production database (MySQL 5.5.29, InnoDB) on a separate VM (onto which I just threw 8GB of RAM) and need the result of just one query, which is taking a very long time.
This is the query:
select *
from bew
inner join bewa on bewa.id = bew.id
inner join sa on bew.sa_id = sa.id
inner join revinfo ri on bewa.rev = ri.id
where sa.vp_id = 6
and bewa.prio is not null
and bewa.rev > 0
and bew.person_id > 0
order by bew.person_id, bewa.rev, bewa.prio
These are the column types:
bew.id bigint(20) [PRIMARY KEY, NOT NULL]
bew.person_id bigint(20) [(FOREIGN) KEY, NOT NULL - index: fk_bew_person26]
bew.sa_id bigint(20) [(FOREIGN) KEY, NOT NULL - index: fk_bew_sa27]
bewa.id bigint(20) [NOT NULL (bewa.id, bewa.rev) = PRIMARY KEY]
bewa.rev bigint(20) [NOT NULL (bewa.id, bewa.rev) = PRIMARY KEY, (FOREIGN) KEY - index: fk_bew_aud_revinfo28]
bewa.prio int(11) [index: tmp_idx_bewa_prio]
ri.id bigint(20) [PRIMARY KEY, NOT NULL]
sa.id bigint(20) [PRIMARY KEY, NOT NULL]
sa.vp_id bigint(20) [(FOREIGN) KEY, NOT NULL - index: fk_sa_vp141]
All columns above are indexed either as PRIMARY KEY or as single-column indexes. There are no compound indexes so far (I have tried a few but the plan did not change so I dropped them again).
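For illustration, the compound indexes I mean were of this sort (index names arbitrary; I'm not claiming these exact column orders are right):

```sql
-- join column first, then the filter/sort column
CREATE INDEX tmp_idx_bew_sa_person ON bew (sa_id, person_id);

-- extend the audit table's primary-key lookup with the prio filter
CREATE INDEX tmp_idx_bewa_id_rev_prio ON bewa (id, rev, prio);
```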
Also here's some numbers:
select count(*) from sa where sa.vp_id = 6;
> 179
select count(*) from bew;
> 240808
select count(*) from bew
inner join sa on bew.sa_id = sa.id
where sa.vp_id = 6 and bew.person_id > 0;
> 199955
select count(*) from bewa;
> 5126493
select count(*)
from bew
inner join bewa on bewa.id = bew.id
inner join sa on bew.sa_id = sa.id
inner join revinfo ri on bewa.rev = ri.id
where sa.vp_id = 6
and bewa.prio is not null
and bewa.rev > 0
and bew.person_id > 0;
> (still counting after 10 min ... :( )
This is the explain output:
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: bew
         type: ALL
possible_keys: PRIMARY,fk_bew_person26,fk_bew_sa27
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 241594
        Extra: Using where; Using temporary; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: sa
         type: eq_ref
possible_keys: PRIMARY,fk_sa_vp141
          key: PRIMARY
      key_len: 8
          ref: bew.sa_id
         rows: 1
        Extra: Using where
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: bewa
         type: ref
possible_keys: PRIMARY,fk_bew_aud_revinfo28,tmp_idx_bewa_prio
          key: PRIMARY
      key_len: 8
          ref: bew.id
         rows: 6
        Extra: Using where
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: ri
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 8
          ref: bewa.REV
         rows: 1
        Extra:
There's not much I can do to optimize the SQL itself (it's a large amount of data and I need it sorted), and since I only need to run the query once, query tuning doesn't really matter anyway.
So just take this query as an example for my actual, broader question:
How can I tweak the settings in my my.cnf file so that such (reporting) queries run faster than they do now? What I actually want is to give a single session as much memory as possible to perform one query. I don't need things like a query cache or good multi-user performance. Given about 6GB of free memory, how can I configure the MySQL server to use as much of it as possible to compute a single query?
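To make the question concrete, here is the sort of my.cnf fragment I have in mind; the variable names are real MySQL 5.5 settings, but the values are pure guesses on my part:

```ini
[mysqld]
# InnoDB page cache -- the main memory consumer for reading table/index data
innodb_buffer_pool_size = 4G
# per-session buffers used for ORDER BY filesorts and non-indexed joins
sort_buffer_size        = 256M
join_buffer_size        = 256M
read_rnd_buffer_size    = 64M
# let large implicit temporary tables stay in memory instead of spilling to disk
tmp_table_size          = 1G
max_heap_table_size     = 1G
```

Presumably the per-session variables could also be raised with `SET SESSION sort_buffer_size = ...` just before running the query, without restarting the server, but I'm unsure which of these settings actually matter for one big sort/join.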