48 votes

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.

Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.

Or is there another solution I have missed?

Edit

The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
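
For reference, a minimal sketch of that workflow (the chunk size is illustrative; pick one whose compressed output stays under 2MB, and beware that a split point landing inside a multi-line CREATE TABLE statement would break that chunk):

# one INSERT per row, so each data line is self-contained
mysqldump --extended-insert=FALSE dbname > dump.sql
# split into chunks; adjust the line count to hit roughly 40MB per chunk
split --lines=40000 dump.sql chunk_
# compress each chunk for upload through phpMyAdmin
for f in chunk_*; do bzip2 "$f"; done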

Comment (18 votes): Look at this Q in case you are searching for how to split a large PostgreSQL dump into smaller files. – rubo77

18 Answers

44 votes

This bash script splits a dump file of one database into separate files for each table, using csplit, and names them accordingly:

#!/bin/bash

####
# Split a MySQL dump file into one file per table
# based on https://gist.github.com/jasny/1608062
####

# Adjust this pattern to your dump:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"


if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
        echo "USAGE: extract all tables:"
        echo " $0 DUMP_FILE"
        echo "extract one table:"
        echo " $0 DUMP_FILE [TABLE]"
        exit
fi

if [ $# -ge 2 ] ; then
        # extract only the table named in $2
        csplit -s -ftable "$1" "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
        # extract all tables
        csplit -s -ftable "$1" "$START" {*}
fi

[ $? -eq 0 ] || exit

# the first chunk holds the dump header; keep it to prepend to every table file
mv table00 head

FILE=$(ls -1 table* | tail -n 1)
if [ $# -ge 2 ] ; then
        # cut off all other tables
        mv "$FILE" foot
else
        # cut the footer off the last chunk
        csplit -b '%d' -s -f"$FILE" "$FILE" "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
        mv "${FILE}1" foot
fi

for FILE in table*; do
        # the table name sits between backticks on the first line of each chunk
        NAME=$(head -n1 "$FILE" | cut -d$'\x60' -f2)
        cat head "$FILE" foot > "$NAME.sql"
done

rm head foot table*

based on https://gist.github.com/jasny/1608062
and https://stackoverflow.com/a/16840625/1069083
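
Usage might look like this, assuming you saved the script as split_tables.sh:

chmod +x split_tables.sh
./split_tables.sh dump.sql           # one <table>.sql file per table
./split_tables.sh dump.sql mytable   # only the table `mytable`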

33 votes

First, dump the schema only (it surely fits within 2MB, no?):

mysqldump -d --all-databases 

and restore it.

Afterwards, dump only the data using separate INSERT statements, so you can split the files and restore them without having to concatenate them on the remote server:

mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
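
Put together, the round trip could look like this (the file names and chunk size are illustrative):

# structure only; upload and import this first
mysqldump -d --all-databases > schema.sql
# data only, one INSERT per row
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE > data.sql
# each part_* file is importable on its own
split --lines=5000 data.sql part_
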
14 votes

There is the excellent mysqldumpsplitter script, which comes with a ton of options for extracting from a mysqldump.

I'll copy the recipe here so you can choose your case from it:

1) Extract a single database from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract DB --match_str database-name

The above command will create the SQL for the specified database from the specified "filename" sql file and store it in compressed format as database-name.sql.gz.

2) Extract a single table from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str table-name

The above command will create the SQL for the specified table from the specified "filename" mysqldump file and store it in compressed format as table-name.sql.gz.

3) Extract tables matching a regular expression from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str regular-expression

The above command will create the SQL for all tables matching the specified regular expression in the specified "filename" mysqldump file and store it in compressed format as individual table-name.sql.gz files.

4) Extract all databases from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract ALLDBS

The above command will extract all databases from the specified "filename" mysqldump file and store them in compressed format as individual database-name.sql.gz files.

5) Extract all tables from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract ALLTABLES

The above command will extract all tables from the specified "filename" mysqldump file and store them in compressed format as individual table-name.sql.gz files.

6) Extract a list of tables from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str '(table1|table2|table3)'

The above command will extract the listed tables from the specified "filename" mysqldump file and store them in compressed format as individual table-name.sql.gz files.

7) Extract a database from a compressed mysqldump:

sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip

The above command will decompress filename.sql.gz using gzip, extract the database named "dbname" from "filename.sql.gz" and store it as out/dbname.sql.gz.

8) Extract a database from a compressed mysqldump in uncompressed format:

sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip --compression none

The above command will decompress filename.sql.gz using gzip, extract the database named "dbname" from "filename.sql.gz" and store it as plain SQL in out/dbname.sql.

9) Extract all tables from mysqldump into a different folder:

sh mysqldumpsplitter.sh --source filename --extract ALLTABLES --output_dir /path/to/extracts/

The above command will extract all tables from the specified "filename" mysqldump file and store them in compressed format as individual table-name.sql.gz files under /path/to/extracts/. The script will create the folder /path/to/extracts/ if it does not exist.

10) Extract one or more tables from one database in a full dump:

Suppose you have a full dump with multiple databases and you want to extract a few tables from one database.

First, extract the single database:

sh mysqldumpsplitter.sh --source filename --extract DB --match_str DBNAME --compression none

Then extract the tables from the result:

sh mysqldumpsplitter.sh --source out/DBNAME.sql --extract REGEXP --match_str "(tbl1|tbl2)"

Alternatively, you can use another option to do this in a single command, as follows:

sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1|tbl2)" --compression none

The above command will extract both tbl1 and tbl2 from the DBNAME database, in SQL format, under the folder "out" in the current directory.

You can extract a single table as follows:

sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1)" --compression none

11) Extract all tables from a specific database:

sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.*" --compression none

The above command will extract all tables from the DBNAME database in SQL format and store them under the "out" directory.

12) List the contents of the mysqldump file:

sh mysqldumpsplitter.sh --source filename --desc

The above command will list the databases and tables in the dump file.

You may later choose to load the files: zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME

  • Also, if a single table you have extracted is still bigger than your limit, you can use the Linux split command with a line count to split the dump further (see the sketch after this list): split -l 10000 filename.sql

  • That said, if this is a recurring need, you might consider using mydumper, which creates individual dumps you won't need to split!
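
For instance, to extract one oversized table and cut it down to uploadable pieces (big_table is a placeholder name; out/ is the script's default output directory, and I am assuming --compression none yields out/big_table.sql, as it does for database extraction):

sh mysqldumpsplitter.sh --source fulldump.sql --extract TABLE --match_str big_table --compression none
split -l 10000 out/big_table.sql big_table.part_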

11 votes

You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:

for T in `mysql -N -B -e 'show tables from dbname'`; \
   do echo $T; \
   mysqldump [connecting_options] dbname $T \
   | gzip -c > dbname_$T.dump.gz ; \
   done

This will create a gzip file for each table.

Another way of splitting the output of mysqldump into separate files is to use the --tab option.

mysqldump [connecting options] --tab=directory_name dbname 

where directory_name is the name of an empty directory. This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.
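
If you do get shell access to the target machine at some point, restoring such a pair could look like this sketch (the paths and table name are illustrative):

mysqldump --tab=/tmp/dumpdir dbname            # one .sql and one .txt file per table
mysql dbname < /tmp/dumpdir/mytable.sql        # recreate the table structure
mysqlimport dbname /tmp/dumpdir/mytable.txt    # loads the data via LOAD DATA INFILE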

10 votes

Late reply, but I was looking for the same solution and came across the following code on the website below:

for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done

http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files
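
A variant that also skips the other system schemas (a sketch; extend the grep pattern to taste):

for I in $(mysql -e 'show databases' -s --skip-column-names | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
    mysqldump "$I" | gzip > "$I.sql.gz"
done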

4 votes

I've recently created sqlsplit.com. Try it out.

3 votes

I wrote a new version of the SQLDumpSplitter, this time with a proper parser, allowing nice things like splitting INSERTs with many values across files. It is multi-platform now: https://philiplb.de/sqldumpsplitter3/

2 votes

You don't need SSH access to either of your servers. Just a mysql[dump] client is fine. With mysql[dump], you can dump your database and import it again.

On your PC, you can do something like:

$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase

and you're done. :-)

Hope this helps.

2 votes

You can split an existing dump file with awk. It's very quick and simple.

Let's split the dump by table:

awk 'BEGIN { output = "comments" }
/^CREATE TABLE/ { close(output); output = substr($3, 2, length($3) - 2) }
{ print $0 >> output }' dump.sql

Or you can split the dump by database:

awk 'BEGIN { output = "comments" } /Current Database/ { close(output); output = $4 } { print $0 >> output }' backup.sql
1 vote

You can dump individual tables with mysqldump by running mysqldump database table1 table2 ... tableN

If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.
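
For example, grouping a couple of smaller tables into one compressed upload (the table names are placeholders):

mysqldump dbname table1 table2 | gzip > tables_1_2.sql.gz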

1 vote

I would recommend the utility BigDump; you can grab it here: http://www.ozerov.de/bigdump.php. It staggers the execution of the dump, staying as close to your limit as it can manage, executing whole lines at a time.

1 vote

I wrote a Python script to split a single large SQL dump file into separate files, one for each CREATE TABLE statement. It writes the files to a folder that you specify. If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line by line, without reading the whole file into memory first, so it is good for large files.

https://github.com/kloddant/split_sql_dump_file

import sys, re, os

if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3.  Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")

sqldump_path = input("Enter the path to the sql dump file: ")

if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path.  {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))

# Default to a folder named after the dump file; note that rstrip('.sql')
# would strip characters rather than the suffix, so use splitext instead.
output_folder_path = input("Enter the path to the output folder: ") or os.path.splitext(sqldump_path)[0]

if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)

table_name = None
output_file_path = None
smallfile = None

with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                # Start a new per-table file, closing the previous one.
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        # Skip everything that precedes the first CREATE TABLE statement.
        if not table_name:
            continue
        smallfile.write(line)
    if smallfile:
        smallfile.close()
0 votes

Try csplit(1) to cut the output up into individual tables, based on regular expressions (matching the table boundary, I would think).
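
A minimal sketch, assuming the dump contains mysqldump's standard "-- Table structure for table" comment before each table:

csplit -s -f table_ dump.sql '/-- Table structure for table/' '{*}'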

0 votes

This script should do it:

#!/bin/sh

# edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"

MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"

echo "- Dumping tables for each DB"
databases=$($MYSQL --user="$USER" --password="$PASSWORD" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)")
for db in $databases; do
    echo "- Creating $db DB"
    mkdir -p "$MYSQLDIR/$db"
    chmod -R 777 "$MYSQLDIR/$db"
    for tb in $($MYSQL --user="$USER" --password="$PASSWORD" -N -B -e "use $db; show tables"); do
        echo "-- Creating table $tb"
        # note: --delayed-insert is deprecated in recent MySQL versions
        $MYSQLDUMP --opt --delayed-insert --insert-ignore --user="$USER" --password="$PASSWORD" "$db" "$tb" | bzip2 -c > "$MYSQLDIR/$db/$tb.sql.bz2"
    done
    echo
done
0 votes

Check out SQLDumpSplitter 2; I just used it to split a 40MB dump with success. You can get it at the link below:

sqldumpsplitter.com

Hope this helps.

0 votes

I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's available here: https://github.com/Verace/MySQLDumpSplitter.

0 votes

A clarification on @Vérace's answer:

I especially like the interactive method; you can split a large file in Eclipse. I have successfully tried a 105GB file on Windows:

Just add the MySQLDumpSplitter library to your project: http://dl.bintray.com/verace/MySQLDumpSplitter/jar/

A quick note on how to import:

- In Eclipse, right-click on your project --> Import
- Select "File System" and then "Next"
- Browse to the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in the Package Explorer in Eclipse
- Double-click the jar file in Eclipse (in the Package Explorer)
- The "MySQL Dump file splitter" window opens, where you can specify the path of your dump file and proceed with the split.
0 votes

Try this: https://github.com/shenli/mysqldump-hugetable. It will dump the data into many small files, each containing at most MAX_RECORDS records. You can set this parameter in env.sh.