75 votes

I know I can export a single table like this:

mysqldump -u root -p db_name table_name > table_name.sql

Is there any way to export only a portion of a table with mysqldump? For example, 0 - 1,000,000 rows, 1,000,000 - 2,000,000 rows, etc.

Should I do this with mysqldump or a query?


7 Answers

151 votes
mysqldump -uroot -p db_name table_name --where='id<1000000'

or you can use

SELECT * INTO OUTFILE 'data_path.sql' FROM table_name WHERE id < 100000
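To get the fixed-size chunks asked about in the question, the same --where flag can be driven from a small loop. A minimal bash sketch, assuming an auto-increment id column and the placeholder names db_name/table_name from the question (each iteration prompts for the password):

# Dump four 1,000,000-row id ranges into separate files.
for i in 0 1 2 3; do
  start=$((i * 1000000))
  end=$(((i + 1) * 1000000))
  mysqldump -u root -p db_name table_name \
    --where="id >= $start AND id < $end" > table_name.part$i.sql
done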
5 votes
mysqldump --skip-triggers --compact --no-create-info --user=USER --password=PASSWORD -B DATABASE --tables MY_TABLE --where='SOME_COLUMN>=xxxx' > out.sql
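Since --no-create-info leaves out the CREATE TABLE statement, out.sql contains only the INSERTs, so MY_TABLE must already exist in the target schema before reloading. A minimal sketch with the same placeholder credentials:

# Reload the partial dump; the table definition must already be present.
mysql --user=USER --password=PASSWORD DATABASE < out.sql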
2 votes

The file produced by mysqldump is different from the file produced by SELECT ... INTO OUTFILE. With the second approach you cannot simply run mysql database < file to load the data back into a database.
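An INTO OUTFILE dump is a plain tab-delimited data file, so it is reloaded with LOAD DATA INFILE rather than the mysql client. A minimal sketch, assuming the default delimiters and a placeholder table name:

-- The server reads the file from its own filesystem; the default
-- tab/newline delimiters match those of SELECT ... INTO OUTFILE.
LOAD DATA INFILE 'data_path.sql' INTO TABLE table_name;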

0 votes

In my case I executed this:

SELECT * 
  INTO OUTFILE 'C:\Documents and Settings\Anton.Zarkov\Desktop\joomla_export\data_AZ.sql'
  FROM `jos_glossary`
 WHERE id>6000
  • There is no syntax error; the query goes through.
    1. The result is NULL: no rows were written. (I'm sure the last id is 6458.)
    2. If I repeat the query, an error occurs: #1086 - File 'C:Documents and SettingsAnton.ZarkovDesktopjoomla_exportdata_AZ.sql' already exists
    3. Unfortunately, I cannot find the "existing" file anywhere on drive C. Where is it?

The environment: phpMyAdmin SQL Dump 3.4.5; host: localhost; server version 5.5.16; PHP version 5.3.8.
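The error text, with the backslashes missing from the path, hints at what happened: in a MySQL string literal the backslash is an escape character, so 'C:\Documents ...' is read as 'C:Documents ...'. A small sketch of the usual workarounds (the file is written by the server process, so the oddly named file likely ended up under the server's own directories):

-- Use forward slashes, which MySQL accepts in Windows paths ...
SELECT * INTO OUTFILE 'C:/Documents and Settings/Anton.Zarkov/Desktop/joomla_export/data_AZ.sql'
  FROM `jos_glossary` WHERE id > 6000;

-- ... or double every backslash.
SELECT * INTO OUTFILE 'C:\\Documents and Settings\\Anton.Zarkov\\Desktop\\joomla_export\\data_AZ.sql'
  FROM `jos_glossary` WHERE id > 6000;

-- Check where the server would have written the earlier file.
SHOW VARIABLES LIKE 'datadir';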

0 votes
mysqldump -uroot -p db_name table_name --where='id<1000000' > yourdumpname.sql
0 votes

The command below selects an id range; you can use date_created or any other column instead of id:

mysqldump --opt --host=dns --user=user_name --password=your_passwd db_name --tables table_name  --where "id > 1 and id < 100 " > /file_name.sql

e.g. --where="date_created > '2019-01-18'" instead of the id condition
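If you want one dump per time window rather than a single cut-off, the same --where flag can be varied per run. A rough bash sketch, assuming a DATETIME column named date_created and the placeholder connection details from the command above:

# One dump per month (placeholder host/user/password/db/table).
for m in 01 02 03; do
  mysqldump --opt --host=dns --user=user_name --password=your_passwd db_name \
    --tables table_name \
    --where="date_created >= '2019-$m-01' AND date_created < '2019-$m-01' + INTERVAL 1 MONTH" \
    > table_name.2019-$m.sql
done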

-1 votes

The question is as current as ever; most people find this sort of question because they run into the single-threaded design of mysql and mysqldump.
If you have millions or billions of rows, exporting can take days (or even weeks), so you end up exporting only parts of the data instead.

A quick hack is to export the data in portions; this works best if you have a numeric key (such as an auto-increment id).
Below is a Linux/Unix example of how to export a table roughly 20-100 times faster than normal.

# Assumptions: column "id" runs from 1 to 10,000,000, the CPU has 16 threads,
# and the disk is an SSD or NVMe.
# 1000 chunks of 10,000 ids each, 16 mysqldump processes at a time,
# each writing its own output.N file.
seq 0 999 | xargs -P16 -I{} mysqldump -h localhost --password=PASSWORD --single-transaction DATABASE TABLE --where "id > {}*10000 AND id <= {}*10000+10000" -r output.{}

The above runs 16 mysqldump processes in parallel, roughly cutting the export time to 1/10 of normal. It produces one output file per chunk, and those files can also be loaded in parallel, which speeds up loading by up to 10 times.
On a strong server I use up to 150 parallel threads; the right number depends on the type of disk and CPU you are running.
Refined a bit, this method can cut a one-week export or load down to a few hours.
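Since the chunk files can be loaded in parallel, here is a matching sketch for the import side, with the same placeholder credentials (the sh -c wrapper is needed so each xargs job gets its own input redirection):

# Load the output.N chunk files with 16 parallel mysql clients.
ls output.* | xargs -P16 -I{} sh -c 'mysql -f -h localhost --password=PASSWORD DATABASE < "$1"' _ {}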

When doing this over a network, --compress can help a lot. Also, writing INSERT IGNORE statements (mysqldump --insert-ignore) helps with faulty or duplicate index entries that are hard to avoid on large data sets, and loading with 'mysql -f' further helps to avoid stopping in such cases.

P.S. Never use the mysqldump options that add indexes and keys at the end on large tables.